DPDK patches and discussions
* [dpdk-dev] [PATCH v5 00/22] Add DLB PMD
  @ 2020-10-17 19:03  3% ` Timothy McDaniel
  0 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-17 19:03 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj

The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer).
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
  distribution.

The DLB hardware supports both load-balanced (LDB) and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link and must be connected to a directed
queue. Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
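
For readers less familiar with the eventdev API that this series plugs into,
below is a minimal sketch of the configure/setup/link flow an application
would drive against a PMD such as this one. It is illustrative only and not
taken from the series; the device ID, queue/port counts and config values are
assumptions, and real code should derive its limits from
rte_event_dev_info_get().

/* Illustrative eventdev usage only -- not part of this patch series. */
#include <rte_eventdev.h>

static int
setup_one_worker(uint8_t dev_id)
{
	struct rte_event_dev_config cfg = {
		.nb_event_queues = 1,
		.nb_event_ports = 1,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 32,
		.nb_event_port_enqueue_depth = 32,
	};
	uint8_t queue_id = 0, port_id = 0;

	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		return -1;
	/* NULL confs request the PMD defaults (a load-balanced queue/port). */
	if (rte_event_queue_setup(dev_id, queue_id, NULL) < 0 ||
	    rte_event_port_setup(dev_id, port_id, NULL) < 0)
		return -1;
	/* On DLB the number of queues an LDB port may link is HW-limited;
	 * a directed port accepts exactly one link.
	 */
	if (rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1) != 1)
		return -1;
	return rte_event_dev_start(dev_id);
}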

While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory-maps
the device MMIO so that datapath operations occur purely in user space.

The framework to support both the PF PMD and the bifurcated PMD exists in
this patchset, which is why the iface.[ch] layer is present.

Major changes in v5 after DPDK reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use the EAL rte-cpuflags patch from Liang Ma

Major changes in v4 after DPDK reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
  named probe API
- relocated enqueue logic to enqueue patch

Major changes in v3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
  memory for depths < 8. Hardware requires the minimum allocation to be
  at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes

Major changes in v2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
  I will address the remaining items identified by Mattias in the next
  patch delivery.
- General code cleanup based on internal code reviews

Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")

Timothy McDaniel (22):
  event/dlb: add documentation and meson infrastructure
  event/dlb: add dynamic logging
  event/dlb: add private data structures and constants
  event/dlb: add definitions shared with LKM or shared code
  event/dlb: add inline functions
  event/dlb: add probe
  event/dlb: add xstats
  event/dlb: add infos get and configure
  event/dlb: add queue and port default conf
  event/dlb: add queue setup
  event/dlb: add port setup
  event/dlb: add port link
  event/dlb: add port unlink and port unlinks in progress
  event/dlb: add eventdev start
  event/dlb: add enqueue and its burst variants
  event/dlb: add dequeue and its burst variants
  event/dlb: add eventdev stop and close
  event/dlb: add PMD's token pop public interface
  event/dlb: add PMD self-tests
  event/dlb: add queue and port release
  event/dlb: add timeout ticks entry point
  doc: Add new DLB eventdev driver to relnotes

 MAINTAINERS                                     |    5 +
 app/test/test_eventdev.c                        |    7 +
 config/rte_config.h                             |    8 +-
 doc/api/doxy-api-index.md                       |    1 +
 doc/guides/eventdevs/dlb.rst                    |  341 ++
 doc/guides/eventdevs/index.rst                  |    1 +
 doc/guides/rel_notes/release_20_11.rst          |    5 +
 drivers/event/dlb/dlb.c                         | 4129 ++++++++++++++
 drivers/event/dlb/dlb_iface.c                   |   79 +
 drivers/event/dlb/dlb_iface.h                   |   82 +
 drivers/event/dlb/dlb_inline_fns.h              |   79 +
 drivers/event/dlb/dlb_log.h                     |   25 +
 drivers/event/dlb/dlb_priv.h                    |  513 ++
 drivers/event/dlb/dlb_selftest.c                | 1551 +++++
 drivers/event/dlb/dlb_user.h                    |  814 +++
 drivers/event/dlb/dlb_xstats.c                  | 1222 ++++
 drivers/event/dlb/meson.build                   |   15 +
 drivers/event/dlb/pf/base/dlb_hw_types.h        |  334 ++
 drivers/event/dlb/pf/base/dlb_osdep.h           |  326 ++
 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h    |  441 ++
 drivers/event/dlb/pf/base/dlb_osdep_list.h      |  131 +
 drivers/event/dlb/pf/base/dlb_osdep_types.h     |   31 +
 drivers/event/dlb/pf/base/dlb_regs.h            | 2368 ++++++++
 drivers/event/dlb/pf/base/dlb_resource.c        | 6902 +++++++++++++++++++++++
 drivers/event/dlb/pf/base/dlb_resource.h        |  876 +++
 drivers/event/dlb/pf/dlb_main.c                 |  591 ++
 drivers/event/dlb/pf/dlb_main.h                 |   52 +
 drivers/event/dlb/pf/dlb_pf.c                   |  746 +++
 drivers/event/dlb/rte_pmd_dlb.c                 |   38 +
 drivers/event/dlb/rte_pmd_dlb.h                 |   72 +
 drivers/event/dlb/rte_pmd_dlb_event_version.map |    9 +
 drivers/event/meson.build                       |    4 +
 32 files changed, 21797 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/eventdevs/dlb.rst
 create mode 100644 drivers/event/dlb/dlb.c
 create mode 100644 drivers/event/dlb/dlb_iface.c
 create mode 100644 drivers/event/dlb/dlb_iface.h
 create mode 100644 drivers/event/dlb/dlb_inline_fns.h
 create mode 100644 drivers/event/dlb/dlb_log.h
 create mode 100644 drivers/event/dlb/dlb_priv.h
 create mode 100644 drivers/event/dlb/dlb_selftest.c
 create mode 100644 drivers/event/dlb/dlb_user.h
 create mode 100644 drivers/event/dlb/dlb_xstats.c
 create mode 100644 drivers/event/dlb/meson.build
 create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
 create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
 create mode 100644 drivers/event/dlb/pf/dlb_main.c
 create mode 100644 drivers/event/dlb/pf/dlb_main.h
 create mode 100644 drivers/event/dlb/pf/dlb_pf.c
 create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
 create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
 create mode 100644 drivers/event/dlb/rte_pmd_dlb_event_version.map

-- 
2.6.4



* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
  2020-10-16 11:20  3%     ` Kinsella, Ray
@ 2020-10-16 17:13  0%       ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2020-10-16 17:13 UTC (permalink / raw)
  To: Kinsella, Ray, Andrew Rybchenko, Neil Horman, Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Ivan Ilchenko

On 10/16/20 2:20 PM, Kinsella, Ray wrote:
>
> On 15/10/2020 14:30, Andrew Rybchenko wrote:
>> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>>
>> Change rte_eth_dev_stop() return value from void to int
>> and return negative errno values in case of error conditions.
>> Also update the usage of the function in ethdev according to
>> the new return type.
>>
>> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>> ---
>>   doc/guides/rel_notes/deprecation.rst   |  1 -
>>   doc/guides/rel_notes/release_20_11.rst |  3 +++
>>   lib/librte_ethdev/rte_ethdev.c         | 27 +++++++++++++++++++-------
>>   lib/librte_ethdev/rte_ethdev.h         |  5 ++++-
>>   4 files changed, 27 insertions(+), 9 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index d1f5ed39db..2e04e24374 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -127,7 +127,6 @@ Deprecation Notices
>>     negative errno values to indicate various error conditions (e.g.
>>     invalid port ID, unsupported operation, failed operation):
>>   
>> -  - ``rte_eth_dev_stop``
>>     - ``rte_eth_dev_close``
>>   
>>   * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
>> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
>> index f8686a50db..c8c30937fa 100644
>> --- a/doc/guides/rel_notes/release_20_11.rst
>> +++ b/doc/guides/rel_notes/release_20_11.rst
>> @@ -355,6 +355,9 @@ API Changes
>>   * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
>>     instead of ``rte_vhost_driver_start`` by crypto applications.
>>   
>> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
>> +  ``int`` to provide a way to report various error conditions.
>> +
>>   
>>   ABI Changes
>>   -----------
>> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
>> index d9b82df073..b8cf04ef4d 100644
>> --- a/lib/librte_ethdev/rte_ethdev.c
>> +++ b/lib/librte_ethdev/rte_ethdev.c
>> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
>>   	struct rte_eth_dev *dev;
>>   	struct rte_eth_dev_info dev_info;
>>   	int diag;
>> -	int ret;
>> +	int ret, ret_stop;
>>   
>>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>>   
>> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
>>   		RTE_ETHDEV_LOG(ERR,
>>   			"Error during restoring configuration for device (port %u): %s\n",
>>   			port_id, rte_strerror(-ret));
>> -		rte_eth_dev_stop(port_id);
>> +		ret_stop = rte_eth_dev_stop(port_id);
>> +		if (ret_stop != 0) {
>> +			RTE_ETHDEV_LOG(ERR,
>> +				"Failed to stop device (port %u): %s\n",
>> +				port_id, rte_strerror(-ret_stop));
>> +		}
>> +
>>   		return ret;
>>   	}
>>   
>> @@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
>>   	return 0;
>>   }
>>   
>> -void
>> +int
>>   rte_eth_dev_stop(uint16_t port_id)
>>   {
>>   	struct rte_eth_dev *dev;
>>   
>> -	RTE_ETH_VALID_PORTID_OR_RET(port_id);
>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>   	dev = &rte_eth_devices[port_id];
>>   
>> -	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
>>   
>>   	if (dev->data->dev_started == 0) {
>>   		RTE_ETHDEV_LOG(INFO,
>>   			"Device with port_id=%"PRIu16" already stopped\n",
>>   			port_id);
>> -		return;
>> +		return 0;
>>   	}
>>   
>>   	dev->data->dev_started = 0;
>>   	(*dev->dev_ops->dev_stop)(dev);
>>   	rte_ethdev_trace_stop(port_id);
>> +
>> +	return 0;
>>   }
>>   
>>   int
>> @@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
>>   
>>   	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
>>   
>> -	rte_eth_dev_stop(port_id);
>> +	ret = rte_eth_dev_stop(port_id);
>> +	if (ret != 0) {
>> +		RTE_ETHDEV_LOG(ERR,
>> +			"Failed to stop device (port %u) before reset: %s - ignore\n",
>> +			port_id, rte_strerror(-ret));
> ABI change is 100%,
> Just question the logic of continuing here to do a reset, if you failed to stop the device.

In the case of reset, I'm sure that we should ignore a stop failure here.
Typically, reset is required to recover from a bad state, etc., and a stop
failure in such a condition could definitely happen.
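
As a side note for application writers, the change boils down to a small
caller-side pattern like the sketch below; the helper name and the logging
are illustrative assumptions, not part of the patch.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

static void
stop_port(uint16_t port_id)
{
	int ret = rte_eth_dev_stop(port_id); /* now returns 0 or a negative errno */

	if (ret != 0)
		printf("Failed to stop port %u: %s\n",
		       port_id, rte_strerror(-ret));
}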



* Re: [dpdk-dev] [PATCH v8 2/3] ethdev: tunnel offload model
  @ 2020-10-16 15:41  3%     ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-16 15:41 UTC (permalink / raw)
  To: Gregory Etelson, dev
  Cc: matan, rasland, elibr, ozsh, asafp, Eli Britstein, Ori Kam,
	Viacheslav Ovsiienko, Neil Horman, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko



On 16/10/2020 13:51, Gregory Etelson wrote:
> From: Eli Britstein <elibr@mellanox.com>
> 
> rte_flow API provides the building blocks for vendor-agnostic flow
> classification offloads. The rte_flow "patterns" and "actions"
> primitives are fine-grained, thus enabling DPDK applications the
> flexibility to offload network stacks and complex pipelines.
> Applications wishing to offload tunneled traffic are required to use
> the rte_flow primitives, such as group, meta, mark, tag, and others to
> model their high-level objects.  The hardware model design for
> high-level software objects is not trivial.  Furthermore, an optimal
> design is often vendor-specific.
> 
> When hardware offloads tunneled traffic in multi-group logic,
> partially offloaded packets may arrive at the application after they
> were modified in hardware. In this case, the application may need to
> restore the original packet headers. Consider the following sequence:
> The application decaps a packet in one group and jumps to a second
> group where it tries to match on a 5-tuple, that will miss and send
> the packet to the application. In this case, the application does not
> receive the original packet but a modified one. Also, in this case,
> the application cannot match on the outer header fields, such as VXLAN
> vni and 5-tuple.
> 
> There are several possible ways to use rte_flow "patterns" and
> "actions" to resolve the issues above. For example:
> 1 Mapping headers to a hardware registers using the
> rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
> 2 Apply the decap only at the last offload stage after all the
> "patterns" were matched and the packet will be fully offloaded.
> Every approach has its pros and cons and is highly dependent on the
> hardware vendor.  For example, some hardware may have a limited number
> of registers while other hardware could not support inner actions and
> must decap before accessing inner headers.
> 
> The tunnel offload model resolves these issues. The model goals are:
> 1 Provide a unified application API to offload tunneled traffic that
> is capable of matching on outer headers after decap.
> 2 Allow the application to restore the outer header of partially
> offloaded packets.
> 
> The tunnel offload model does not introduce new elements to the
> existing RTE flow model and is implemented as a set of helper
> functions.
> 
> For the application to work with the tunnel offload API it
> has to adjust flow rules in multi-table tunnel offload in the
> following way:
> 1 Remove explicit call to decap action and replace it with PMD actions
> obtained from rte_flow_tunnel_decap_and_set() helper.
> 2 Add PMD items obtained from rte_flow_tunnel_match() helper to all
> other rules in the tunnel offload sequence.
> 
> VXLAN Code example:
> 
> Assume application needs to do inner NAT on the VXLAN packet.
> The first  rule in group 0:
> 
> flow create <port id> ingress group 0
>   pattern eth / ipv4 / udp dst is 4789 / vxlan / end
>   actions {pmd actions} / jump group 3 / end
> 
> The first VXLAN packet that arrives matches the rule in group 0 and
> jumps to group 3.  In group 3 the packet will miss since there is no
> flow to match and will be sent to the application.  Application  will
> call rte_flow_get_restore_info() to get the packet outer header.
> 
> Application will insert a new rule in group 3 to match outer and inner
> headers:
> 
> flow create <port id> ingress group 3
>   pattern {pmd items} / eth / ipv4 dst is 172.10.10.1 /
>           udp dst 4789 / vxlan vni is 10 /
>           ipv4 dst is 184.1.2.3 / end
>   actions  set_ipv4_dst  186.1.1.1 / queue index 3 / end
> 
> The result of these rules will be that a VXLAN packet with vni=10, outer
> IPv4 dst=172.10.10.1 and inner IPv4 dst=184.1.2.3 will be received
> decapped on queue 3 with IPv4 dst=186.1.1.1
> 
> Note: The packet in group 3 is considered decapped. All actions in
> that group will be done on the header that was inner before decap. The
> application may specify an outer header to be matched on. It's the PMD's
> responsibility to translate these items to outer metadata.
> 
> API usage:
> 
> /**
>  * 1. Initiate RTE flow tunnel object
>  */
> const struct rte_flow_tunnel tunnel = {
>   .type = RTE_FLOW_ITEM_TYPE_VXLAN,
>   .tun_id = 10,
> }
> 
> /**
>  * 2. Obtain PMD tunnel actions
>  *
>  * pmd_actions is an intermediate variable application uses to
>  * compile actions array
>  */
> struct rte_flow_action **pmd_actions;
> rte_flow_tunnel_decap_and_set(&tunnel, &pmd_actions,
>                               &num_pmd_actions, &error);
> /**
>  * 3. offload the first  rule
>  * matching on VXLAN traffic and jumps to group 3
>  * (implicitly decaps packet)
>  */
> app_actions  =   jump group 3
> rule_items = app_items;  /** eth / ipv4 / udp / vxlan  */
> rule_actions = { pmd_actions, app_actions };
> attr.group = 0;
> flow_1 = rte_flow_create(port_id, &attr,
>                          rule_items, rule_actions, &error);
> 
> /**
>   * 4. after flow creation application does not need to keep the
>   * tunnel action resources.
>   */
> rte_flow_tunnel_action_release(port_id, pmd_actions,
>                                num_pmd_actions);
> /**
>   * 5. After partially offloaded packet miss because there was no
>   * matching rule handle miss on group 3
>   */
> struct rte_flow_restore_info info;
> rte_flow_get_restore_info(port_id, mbuf, &info, &error);
> 
> /**
>  * 6. Offload NAT rule:
>  */
> app_items = { eth / ipv4 dst is 172.10.10.1 / udp dst 4789 /
>             vxlan vni is 10 / ipv4 dst is 184.1.2.3 }
> app_actions = { set_ipv4_dst 186.1.1.1 / queue index 3 }
> 
> rte_flow_tunnel_match(&info.tunnel, &pmd_items,
>                       &num_pmd_items,  &error);
> rule_items = {pmd_items, app_items};
> rule_actions = app_actions;
> attr.group = info.group_id;
> flow_2 = rte_flow_create(port_id, &attr,
>                          rule_items, rule_actions, &error);
> 
> /**
>  * 7. Release PMD items after rule creation
>  */
> rte_flow_tunnel_item_release(port_id,
>                              pmd_items, num_pmd_items);
> 
> References
> 1. https://mails.dpdk.org/archives/dev/2020-June/index.html
> 
> Signed-off-by: Eli Britstein <elibr@mellanox.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> 
> ---
> v5:
> * rebase to next-net
> 
> v6:
> * update the patch comment
> * update tunnel offload section in rte_flow.rst
> ---
>  doc/guides/prog_guide/rte_flow.rst       |  78 +++++++++
>  doc/guides/rel_notes/release_20_11.rst   |   5 +
>  lib/librte_ethdev/rte_ethdev_version.map |   5 +
>  lib/librte_ethdev/rte_flow.c             | 112 +++++++++++++
>  lib/librte_ethdev/rte_flow.h             | 195 +++++++++++++++++++++++
>  lib/librte_ethdev/rte_flow_driver.h      |  32 ++++
>  6 files changed, 427 insertions(+)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 7fb5ec9059..8dc048c6f4 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3131,6 +3131,84 @@ operations include:
>  - Duplication of a complete flow rule description.
>  - Pattern item or action name retrieval.
>  
> +Tunneled traffic offload
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +rte_flow API provides the building blocks for vendor-agnostic flow
> +classification offloads. The rte_flow "patterns" and "actions"
> +primitives are fine-grained, thus enabling DPDK applications the
> +flexibility to offload network stacks and complex pipelines.
> +Applications wishing to offload tunneled traffic are required to use
> +the rte_flow primitives, such as group, meta, mark, tag, and others to
> +model their high-level objects.  The hardware model design for
> +high-level software objects is not trivial.  Furthermore, an optimal
> +design is often vendor-specific.
> +
> +When hardware offloads tunneled traffic in multi-group logic,
> +partially offloaded packets may arrive to the application after they
> +were modified in hardware. In this case, the application may need to
> +restore the original packet headers. Consider the following sequence:
> +The application decaps a packet in one group and jumps to a second
> +group where it tries to match on a 5-tuple, that will miss and send
> +the packet to the application. In this case, the application does not
> +receive the original packet but a modified one. Also, in this case,
> +the application cannot match on the outer header fields, such as VXLAN
> +vni and 5-tuple.
> +
> +There are several possible ways to use rte_flow "patterns" and
> +"actions" to resolve the issues above. For example:
> +
> +1 Mapping headers to a hardware registers using the
> +rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
> +
> +2 Apply the decap only at the last offload stage after all the
> +"patterns" were matched and the packet will be fully offloaded.
> +
> +Every approach has its pros and cons and is highly dependent on the
> +hardware vendor.  For example, some hardware may have a limited number
> +of registers while other hardware could not support inner actions and
> +must decap before accessing inner headers.
> +
> +The tunnel offload model resolves these issues. The model goals are:
> +
> +1 Provide a unified application API to offload tunneled traffic that
> +is capable to match on outer headers after decap.
> +
> +2 Allow the application to restore the outer header of partially
> +offloaded packets.
> +
> +The tunnel offload model does not introduce new elements to the
> +existing RTE flow model and is implemented as a set of helper
> +functions.
> +
> +For the application to work with the tunnel offload API it
> +has to adjust flow rules in multi-table tunnel offload in the
> +following way:
> +
> +1 Remove explicit call to decap action and replace it with PMD actions
> +obtained from rte_flow_tunnel_decap_and_set() helper.
> +
> +2 Add PMD items obtained from rte_flow_tunnel_match() helper to all
> +other rules in the tunnel offload sequence.
> +
> +The model requirements:
> +
> +Software application must initialize
> +rte_tunnel object with tunnel parameters before calling
> +rte_flow_tunnel_decap_set() & rte_flow_tunnel_match().
> +
> +PMD actions array obtained in rte_flow_tunnel_decap_set() must be
> +released by application with rte_flow_action_release() call.
> +
> +PMD items array obtained with rte_flow_tunnel_match() must be released

Should be rte_flow_tunnel_item_release ?

> +by application with rte_flow_item_release() call.  Application can
> +release PMD items and actions after rule was created. However, if the
> +application needs to create additional rule for the same tunnel it
> +will need to obtain PMD items again.
> +
> +Application cannot destroy rte_tunnel object before it releases all
> +PMD actions & PMD items referencing that tunnel.
> +
>  Caveats
>  -------
>  
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index 9155b468d6..f125ce79dd 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -121,6 +121,11 @@ New Features
>    * Flow rule verification was updated to accept private PMD
>      items and actions.
>  
> +* **Added generic API to offload tunneled traffic and restore missed packet.**
> +
> +  * Added a new hardware independent helper API to RTE flow library that
> +    offloads tunneled traffic and restores missed packets.
> +
>  * **Updated Cisco enic driver.**
>  
>    * Added support for VF representors with single-queue Tx/Rx and flow API
> diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
> index f64c379ac2..8ddda2547f 100644
> --- a/lib/librte_ethdev/rte_ethdev_version.map
> +++ b/lib/librte_ethdev/rte_ethdev_version.map
> @@ -239,6 +239,11 @@ EXPERIMENTAL {
>  	rte_flow_shared_action_destroy;
>  	rte_flow_shared_action_query;
>  	rte_flow_shared_action_update;
> +	rte_flow_tunnel_decap_set;
> +	rte_flow_tunnel_match;
> +	rte_flow_get_restore_info;
> +	rte_flow_tunnel_action_decap_release;
> +	rte_flow_tunnel_item_release;
>  };
>  
>  INTERNAL {
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index b74ea5593a..380c5cae2c 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -1143,3 +1143,115 @@ rte_flow_shared_action_query(uint16_t port_id,
>  				       data, error);
>  	return flow_err(port_id, ret, error);
>  }
> +
> +int
> +rte_flow_tunnel_decap_set(uint16_t port_id,
> +			  struct rte_flow_tunnel *tunnel,
> +			  struct rte_flow_action **actions,
> +			  uint32_t *num_of_actions,
> +			  struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->tunnel_decap_set)) {
> +		return flow_err(port_id,
> +				ops->tunnel_decap_set(dev, tunnel, actions,
> +						      num_of_actions, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_match(uint16_t port_id,
> +		      struct rte_flow_tunnel *tunnel,
> +		      struct rte_flow_item **items,
> +		      uint32_t *num_of_items,
> +		      struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->tunnel_match)) {
> +		return flow_err(port_id,
> +				ops->tunnel_match(dev, tunnel, items,
> +						  num_of_items, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_get_restore_info(uint16_t port_id,
> +			  struct rte_mbuf *m,
> +			  struct rte_flow_restore_info *restore_info,
> +			  struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->get_restore_info)) {
> +		return flow_err(port_id,
> +				ops->get_restore_info(dev, m, restore_info,
> +						      error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> +				     struct rte_flow_action *actions,
> +				     uint32_t num_of_actions,
> +				     struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->action_release)) {
> +		return flow_err(port_id,
> +				ops->action_release(dev, actions,
> +						    num_of_actions, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_item_release(uint16_t port_id,
> +			     struct rte_flow_item *items,
> +			     uint32_t num_of_items,
> +			     struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->item_release)) {
> +		return flow_err(port_id,
> +				ops->item_release(dev, items,
> +						  num_of_items, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 48395284b5..a8eac4deb8 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -3620,6 +3620,201 @@ rte_flow_shared_action_query(uint16_t port_id,
>  			     void *data,
>  			     struct rte_flow_error *error);
>  
> +/* Tunnel has a type and the key information. */
> +struct rte_flow_tunnel {
> +	/**
> +	 * Tunnel type, for example RTE_FLOW_ITEM_TYPE_VXLAN,
> +	 * RTE_FLOW_ITEM_TYPE_NVGRE etc.
> +	 */
> +	enum rte_flow_item_type	type;
> +	uint64_t tun_id; /**< Tunnel identification. */
> +
> +	RTE_STD_C11
> +	union {
> +		struct {
> +			rte_be32_t src_addr; /**< IPv4 source address. */
> +			rte_be32_t dst_addr; /**< IPv4 destination address. */
> +		} ipv4;
> +		struct {
> +			uint8_t src_addr[16]; /**< IPv6 source address. */
> +			uint8_t dst_addr[16]; /**< IPv6 destination address. */
> +		} ipv6;
> +	};
> +	rte_be16_t tp_src; /**< Tunnel port source. */
> +	rte_be16_t tp_dst; /**< Tunnel port destination. */
> +	uint16_t   tun_flags; /**< Tunnel flags. */
> +
> +	bool       is_ipv6; /**< True for valid IPv6 fields. Otherwise IPv4. */
> +
> +	/**
> +	 * the following members are required to restore packet
> +	 * after miss
> +	 */
> +	uint8_t    tos; /**< TOS for IPv4, TC for IPv6. */
> +	uint8_t    ttl; /**< TTL for IPv4, HL for IPv6. */
> +	uint32_t label; /**< Flow Label for IPv6. */
> +};
> +
> +/**
> + * Indicate that the packet has a tunnel.
> + */
> +#define RTE_FLOW_RESTORE_INFO_TUNNEL  (1ULL << 0)
> +
> +/**
> + * Indicate that the packet has a non decapsulated tunnel header.
> + */
> +#define RTE_FLOW_RESTORE_INFO_ENCAPSULATED  (1ULL << 1)
> +
> +/**
> + * Indicate that the packet has a group_id.
> + */
> +#define RTE_FLOW_RESTORE_INFO_GROUP_ID  (1ULL << 2)
> +
> +/**
> + * Restore information structure to communicate the current packet processing
> + * state when some of the processing pipeline is done in hardware and should
> + * continue in software.
> + */
> +struct rte_flow_restore_info {
> +	/**
> +	 * Bitwise flags (RTE_FLOW_RESTORE_INFO_*) to indicate validation of
> +	 * other fields in struct rte_flow_restore_info.
> +	 */
> +	uint64_t flags;
> +	uint32_t group_id; /**< Group ID where packed missed */
> +	struct rte_flow_tunnel tunnel; /**< Tunnel information. */
> +};
> +
> +/**
> + * Allocate an array of actions to be used in rte_flow_create, to implement
> + * tunnel-decap-set for the given tunnel.
> + * Sample usage:
> + *   actions vxlan_decap / tunnel-decap-set(tunnel properties) /
> + *            jump group 0 / end
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] tunnel
> + *   Tunnel properties.
> + * @param[out] actions
> + *   Array of actions to be allocated by the PMD. This array should be
> + *   concatenated with the actions array provided to rte_flow_create.
> + * @param[out] num_of_actions
> + *   Number of actions allocated.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL. PMDs initialize this
> + *   structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_decap_set(uint16_t port_id,
> +			  struct rte_flow_tunnel *tunnel,
> +			  struct rte_flow_action **actions,
> +			  uint32_t *num_of_actions,
> +			  struct rte_flow_error *error);
> +
> +/**
> + * Allocate an array of items to be used in rte_flow_create, to implement
> + * tunnel-match for the given tunnel.
> + * Sample usage:
> + *   pattern tunnel-match(tunnel properties) / outer-header-matches /
> + *           inner-header-matches / end
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] tunnel
> + *   Tunnel properties.
> + * @param[out] items
> + *   Array of items to be allocated by the PMD. This array should be
> + *   concatenated with the items array provided to rte_flow_create.
> + * @param[out] num_of_items
> + *   Number of items allocated.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL. PMDs initialize this
> + *   structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_match(uint16_t port_id,
> +		      struct rte_flow_tunnel *tunnel,
> +		      struct rte_flow_item **items,
> +		      uint32_t *num_of_items,
> +		      struct rte_flow_error *error);
> +
> +/**
> + * Populate the current packet processing state, if exists, for the given mbuf.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] m
> + *   Mbuf struct.
> + * @param[out] info
> + *   Restore information. Upon success contains the HW state.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL. PMDs initialize this
> + *   structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_get_restore_info(uint16_t port_id,
> +			  struct rte_mbuf *m,
> +			  struct rte_flow_restore_info *info,
> +			  struct rte_flow_error *error);
> +
> +/**
> + * Release the action array as allocated by rte_flow_tunnel_decap_set.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] actions
> + *   Array of actions to be released.
> + * @param[in] num_of_actions
> + *   Number of elements in actions array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL. PMDs initialize this
> + *   structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> +				     struct rte_flow_action *actions,
> +				     uint32_t num_of_actions,
> +				     struct rte_flow_error *error);
> +
> +/**
> + * Release the item array as allocated by rte_flow_tunnel_match.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] items
> + *   Array of items to be released.
> + * @param[in] num_of_items
> + *   Number of elements in item array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL. PMDs initialize this
> + *   structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_item_release(uint16_t port_id,
> +			     struct rte_flow_item *items,
> +			     uint32_t num_of_items,
> +			     struct rte_flow_error *error);
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_ethdev/rte_flow_driver.h b/lib/librte_ethdev/rte_flow_driver.h
> index 58f56b0262..bd5ffc0bb1 100644
> --- a/lib/librte_ethdev/rte_flow_driver.h
> +++ b/lib/librte_ethdev/rte_flow_driver.h
> @@ -131,6 +131,38 @@ struct rte_flow_ops {
>  		 const struct rte_flow_shared_action *shared_action,
>  		 void *data,
>  		 struct rte_flow_error *error);
> +	/** See rte_flow_tunnel_decap_set() */
> +	int (*tunnel_decap_set)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_tunnel *tunnel,
> +		 struct rte_flow_action **pmd_actions,
> +		 uint32_t *num_of_actions,
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_tunnel_match() */
> +	int (*tunnel_match)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_tunnel *tunnel,
> +		 struct rte_flow_item **pmd_items,
> +		 uint32_t *num_of_items,
> +		 struct rte_flow_error *err);

Should be rte_flow_get_restore_info

> +	/** See rte_flow_get_rte_flow_restore_info() */
> +	int (*get_restore_info)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_mbuf *m,
> +		 struct rte_flow_restore_info *info,
> +		 struct rte_flow_error *err);

Should be rte_flow_tunnel_action_decap_release
> +	/** See rte_flow_action_tunnel_decap_release() */
> +	int (*action_release)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_action *pmd_actions,
> +		 uint32_t num_of_actions,
> +		 struct rte_flow_error *err);

Should be rte_flow_tunnel_item_release?
> +	/** See rte_flow_item_release() */
> +	int (*item_release)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_item *pmd_items,
> +		 uint32_t num_of_items,
> +		 struct rte_flow_error *err);
>  };
>  
>  /**
> 

ABI Changes Acked-by: Ray Kinsella <mdr@ashroe.eu>


* Re: [dpdk-dev] [PATCH v9 1/6] ethdev: introduce Rx buffer split
  2020-10-16 11:21  4%     ` Ferruh Yigit
@ 2020-10-16 13:08  0%       ` Slava Ovsiienko
  0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-16 13:08 UTC (permalink / raw)
  To: Ferruh Yigit, dev
  Cc: NBU-Contact-Thomas Monjalon, stephen, olivier.matz, jerinjacobk,
	maxime.coquelin, david.marchand, arybchenko

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, October 16, 2020 14:21
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> stephen@networkplumber.org; olivier.matz@6wind.com;
> jerinjacobk@gmail.com; maxime.coquelin@redhat.com;
> david.marchand@redhat.com; arybchenko@solarflare.com
> Subject: Re: [PATCH v9 1/6] ethdev: introduce Rx buffer split
> 
> On 10/16/2020 11:22 AM, Viacheslav Ovsiienko wrote:
> > The DPDK datapath in the transmit direction is very flexible.
> > An application can build the multi-segment packet and manage almost
> > all data aspects - the memory pools where segments are allocated from,
> > the segment lengths, the memory attributes like external buffers,
> > registered for DMA, etc.
> >
[snip]
> > +
> > +* **[uses]       rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> > +* **[uses]       rte_eth_rxconf**: ``rx_conf.rx_seg, rx_conf.rx_nseg``.
> > +* **[implements] datapath**: ``Buffer Split functionality``.
> > +* **[provides]   rte_eth_dev_info**:
> ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> > +* **[provides]   eth_dev_ops**: ``rxq_info_get:buffer_split``.
> 
> Previously you mentioned this is because 'rxq_info_get()' can provide
> buffer_split information, but with the current implementation it doesn't, and
> there is no field in the struct to report such.
> 
> I suggest either adding it now, while you can :) [with a techboard approval],
> or removing the above documentation of it.
> 
> <...>
> 

Mmm, I messed up with rx_burst_mode_get(). Will fix, thanks.


> >   /**
> > + * Ethernet device Rx buffer segmentation capabilities.
> > + */
> > +__rte_experimental
> > +struct rte_eth_rxseg_capa {
> > +	__extension__
> > +	uint32_t max_nseg:16; /**< Maximum amount of segments to split. */
> > +	uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/
> > +	uint32_t offset_allowed:1; /**< Supports buffer offsets. */
> > +	uint32_t offset_align_log2:4; /**< Required offset alignment. */ };
> 
> Now we are fiddling with details, but,
> 
> I am not a fan of the bitfields [1], but I assumed Thomas' request was to enable
> expanding capabilities later without breaking the ABI, which makes sense and
> suits this kind of capability struct. If this is correct, why make 'max_nseg'
> a bitfield too?
> 
> Why not,
> uint16_t max_nseg;
> uint16_t multi_pools:1;
> uint16_t offset_allowed:1;
> uint16_t offset_align_log2:4;
> < This still leaves 10 bits to expand without an ABI break>
> 
> [1]
> unless it is a very space-critical use case; otherwise they just add more code to
> extract the same value, and are not as simple as a plain variable :)

It seems not to be the case; here is the listing of rte_eth_rx_queue_check_split():

 8963 4b67 440FB784      movzwl 188(%rsp),%r8d  ; [SO] max_nseg is fetched as regular uint16_t
 8963      24BC0000
 8963      00
 8964 4b70 664539C1      cmpw %r8w,%r9w
 8965 4b74 0F87A402      ja .L1749
 8965      0000

I would prefer to keep uint32_t - it is more generic, IMO.

With best regards, Slava



* Re: [dpdk-dev] [PATCH v9 1/6] ethdev: introduce Rx buffer split
  @ 2020-10-16 11:21  4%     ` Ferruh Yigit
  2020-10-16 13:08  0%       ` Slava Ovsiienko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-16 11:21 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, dev
  Cc: thomas, stephen, olivier.matz, jerinjacobk, maxime.coquelin,
	david.marchand, arybchenko

On 10/16/2020 11:22 AM, Viacheslav Ovsiienko wrote:
> The DPDK datapath in the transmit direction is very flexible.
> An application can build the multi-segment packet and manage
> almost all data aspects - the memory pools where segments
> are allocated from, the segment lengths, the memory attributes
> like external buffers, registered for DMA, etc.
> 
> In the receiving direction, the datapath is much less flexible:
> an application can only specify the memory pool to configure the
> receiving queue and nothing more. In order to extend the receiving
> datapath capabilities, it is proposed to add a way to provide
> extended information on how to split the packets being received.
> 
> The new offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT in device
> capabilities is introduced to present the way for a PMD to report to
> the application that it supports splitting received packets into
> configurable segments. Prior to invoking the rte_eth_rx_queue_setup()
> routine, the application should check the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag.
> 
> The following structure is introduced to specify the Rx packet
> segment for RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload:
> 
> struct rte_eth_rxseg_split {
> 
>      struct rte_mempool *mp; /* memory pools to allocate segment from */
>      uint16_t length; /* segment maximal data length,
> 		       	configures "split point" */
>      uint16_t offset; /* data offset from beginning
> 		       	of mbuf data buffer */
>      uint32_t reserved; /* reserved field */
> };
> 
> The segment descriptions are added to the rte_eth_rxconf structure:
>     rx_seg - pointer to the array of segment descriptions, each element
>               describes the memory pool, maximal data length, initial
>               data offset from the beginning of data buffer in mbuf.
> 	     This array allows to specify the different settings for
> 	     each segment in individual fashion.
>     rx_nseg - number of elements in the array
> 
> If the extended segment descriptions are provided with these new
> fields, the mp parameter of rte_eth_rx_queue_setup must be
> specified as NULL to avoid ambiguity.
> 
> There are two options to specify Rx buffer configuration:
> - mp is not NULL, rx_conf.rx_seg is NULL, rx_conf.rx_nseg is zero,
>    it is compatible configuration, follows existing implementation,
>    provides single pool and no description for segment sizes
>    and offsets.
> - mp is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not
>    zero, it provides the extended configuration, individually for
>    each segment.
> 
> If the Rx queue is configured with the new settings, the packets being
> received will be split into multiple segments pushed to the mbufs
> with specified attributes. The PMD will split the received packets
> into multiple segments according to the specification in the
> description array.
> 
> For example, let's suppose we configured the Rx queue with the
> following segments:
>      seg0 - pool0, len0=14B, off0=2
>      seg1 - pool1, len1=20B, off1=128B
>      seg2 - pool2, len2=20B, off2=0B
>      seg3 - pool3, len3=512B, off3=0B
> 
> The packet 46 bytes long will look like the following:
>      seg0 - 14B long @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
>      seg1 - 20B long @ 128 in mbuf from pool1
>      seg2 - 12B long @ 0 in mbuf from pool2
> 
> The packet 1500 bytes long will look like the following:
>      seg0 - 14B @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
>      seg1 - 20B @ 128 in mbuf from pool1
>      seg2 - 20B @ 0 in mbuf from pool2
>      seg3 - 512B @ 0 in mbuf from pool3
>      seg4 - 512B @ 0 in mbuf from pool3
>      seg5 - 422B @ 0 in mbuf from pool3
> 
> The offload RTE_ETH_RX_OFFLOAD_SCATTER must be present and
> configured to support new buffer split feature (if rx_nseg
> is greater than one).
> 
> The split limitations imposed by the underlying PMD are reported
> in the newly introduced rte_eth_dev_info->rx_seg_capa field.
> 
> The new approach would allow splitting the ingress packets into
> multiple parts pushed to the memory with different attributes.
> For example, the packet headers can be pushed to the embedded
> data buffers within mbufs and the application data into
> the external buffers attached to mbufs allocated from the
> different memory pools. The memory attributes for the split
> parts may differ either - for example the application data
> may be pushed into the external memory located on the dedicated
> physical device, say GPU or NVMe. This would improve the DPDK
> receiving datapath flexibility with preserving compatibility
> with existing API.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

<...>

> +.. _nic_features_buffer_split:
> +
> +Buffer Split on Rx
> +------------------
> +
> +Scatters the packets being received on specified boundaries to segmented mbufs.
> +
> +* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> +* **[uses]       rte_eth_rxconf**: ``rx_conf.rx_seg, rx_conf.rx_nseg``.
> +* **[implements] datapath**: ``Buffer Split functionality``.
> +* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> +* **[provides]   eth_dev_ops**: ``rxq_info_get:buffer_split``.

Previously you mentioned this is because 'rxq_info_get()' can provide
buffer_split information, but with the current implementation it doesn't, and
there is no field in the struct to report such.

I suggest either adding it now, while you can :) [with a techboard approval],
or removing the above documentation of it.

<...>

>   /**
> + * Ethernet device Rx buffer segmentation capabilities.
> + */
> +__rte_experimental
> +struct rte_eth_rxseg_capa {
> +	__extension__
> +	uint32_t max_nseg:16; /**< Maximum amount of segments to split. */
> +	uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/
> +	uint32_t offset_allowed:1; /**< Supports buffer offsets. */
> +	uint32_t offset_align_log2:4; /**< Required offset alignment. */
> +};

Now we are fiddling with details, but,

I am not a fan of the bitfields [1], but I assumed Thomas' request was to enable
expanding capabilities later without breaking the ABI, which makes sense and
suits this kind of capability struct. If this is correct, why make 'max_nseg'
a bitfield too?

Why not,
uint16_t max_nseg;
uint16_t multi_pools:1;
uint16_t offset_allowed:1;
uint16_t offset_align_log2:4;
< This still leaves 10 bits to expand without an ABI break>

[1]
unless it is a very space-critical use case; otherwise they just add more code to
extract the same value, and are not as simple as a plain variable :)
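
To make the cover letter's example concrete, here is a minimal sketch of the
corresponding queue setup. It is illustrative only: the function name and its
pool/descriptor parameters are assumptions, the offload flag names follow the
cover letter, and the union rte_eth_rxseg wrapper matches the API as merged
and may differ slightly in this revision.

/* Illustrative buffer-split configuration per the cover letter's example. */
#include <rte_common.h>
#include <rte_ethdev.h>

static int
setup_split_rxq(uint16_t port_id, uint16_t nb_rxd, unsigned int socket_id,
		struct rte_mempool *pool0, struct rte_mempool *pool1,
		struct rte_mempool *pool2, struct rte_mempool *pool3)
{
	union rte_eth_rxseg rx_seg[] = {
		{ .split = { .mp = pool0, .length = 14,  .offset = 2   } },
		{ .split = { .mp = pool1, .length = 20,  .offset = 128 } },
		{ .split = { .mp = pool2, .length = 20,  .offset = 0   } },
		{ .split = { .mp = pool3, .length = 512, .offset = 0   } },
	};
	struct rte_eth_rxconf rx_conf = {
		/* RTE_ETH_RX_OFFLOAD_SCATTER must also be set in the port config. */
		.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
		.rx_nseg = RTE_DIM(rx_seg),
		.rx_seg = rx_seg,
	};

	/* mb_pool is NULL: the per-segment pools come from rx_seg instead. */
	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				      &rx_conf, NULL);
}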


* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
  2020-10-15 13:30  4%   ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
  2020-10-16  9:22  0%     ` Ferruh Yigit
@ 2020-10-16 11:20  3%     ` Kinsella, Ray
  2020-10-16 17:13  0%       ` Andrew Rybchenko
  1 sibling, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-16 11:20 UTC (permalink / raw)
  To: Andrew Rybchenko, Neil Horman, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko
  Cc: dev, Ivan Ilchenko



On 15/10/2020 14:30, Andrew Rybchenko wrote:
> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> 
> Change rte_eth_dev_stop() return value from void to int
> and return negative errno values in case of error conditions.
> Also update the usage of the function in ethdev according to
> the new return type.
> 
> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  doc/guides/rel_notes/deprecation.rst   |  1 -
>  doc/guides/rel_notes/release_20_11.rst |  3 +++
>  lib/librte_ethdev/rte_ethdev.c         | 27 +++++++++++++++++++-------
>  lib/librte_ethdev/rte_ethdev.h         |  5 ++++-
>  4 files changed, 27 insertions(+), 9 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index d1f5ed39db..2e04e24374 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -127,7 +127,6 @@ Deprecation Notices
>    negative errno values to indicate various error conditions (e.g.
>    invalid port ID, unsupported operation, failed operation):
>  
> -  - ``rte_eth_dev_stop``
>    - ``rte_eth_dev_close``
>  
>  * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f8686a50db..c8c30937fa 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -355,6 +355,9 @@ API Changes
>  * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
>    instead of ``rte_vhost_driver_start`` by crypto applications.
>  
> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
> +  ``int`` to provide a way to report various error conditions.
> +
>  
>  ABI Changes
>  -----------
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index d9b82df073..b8cf04ef4d 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
>  	struct rte_eth_dev *dev;
>  	struct rte_eth_dev_info dev_info;
>  	int diag;
> -	int ret;
> +	int ret, ret_stop;
>  
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>  
> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
>  		RTE_ETHDEV_LOG(ERR,
>  			"Error during restoring configuration for device (port %u): %s\n",
>  			port_id, rte_strerror(-ret));
> -		rte_eth_dev_stop(port_id);
> +		ret_stop = rte_eth_dev_stop(port_id);
> +		if (ret_stop != 0) {
> +			RTE_ETHDEV_LOG(ERR,
> +				"Failed to stop device (port %u): %s\n",
> +				port_id, rte_strerror(-ret_stop));
> +		}
> +
>  		return ret;
>  	}
>  
> @@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
>  	return 0;
>  }
>  
> -void
> +int
>  rte_eth_dev_stop(uint16_t port_id)
>  {
>  	struct rte_eth_dev *dev;
>  
> -	RTE_ETH_VALID_PORTID_OR_RET(port_id);
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>  	dev = &rte_eth_devices[port_id];
>  
> -	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
>  
>  	if (dev->data->dev_started == 0) {
>  		RTE_ETHDEV_LOG(INFO,
>  			"Device with port_id=%"PRIu16" already stopped\n",
>  			port_id);
> -		return;
> +		return 0;
>  	}
>  
>  	dev->data->dev_started = 0;
>  	(*dev->dev_ops->dev_stop)(dev);
>  	rte_ethdev_trace_stop(port_id);
> +
> +	return 0;
>  }
>  
>  int
> @@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
>  
>  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
>  
> -	rte_eth_dev_stop(port_id);
> +	ret = rte_eth_dev_stop(port_id);
> +	if (ret != 0) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Failed to stop device (port %u) before reset: %s - ignore\n",
> +			port_id, rte_strerror(-ret));

ABI change is 100%,
Just question the logic of continuing here to do a reset, if you failed to stop the device.


> +	}
>  	ret = dev->dev_ops->dev_reset(dev);
>  
>  	return eth_err(port_id, ret);
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index a61ca115a0..b85861cf2b 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -2277,8 +2277,11 @@ int rte_eth_dev_start(uint16_t port_id);
>   *
>   * @param port_id
>   *   The port identifier of the Ethernet device.
> + * @return
> + *   - 0: Success, Ethernet device stopped.
> + *   - <0: Error code of the driver device stop function.
>   */
> -void rte_eth_dev_stop(uint16_t port_id);
> +int rte_eth_dev_stop(uint16_t port_id);
>  
>  /**
>   * Link up an Ethernet device.
> 


* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
  2020-10-15 13:30  4%   ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
@ 2020-10-16  9:22  0%     ` Ferruh Yigit
  2020-10-16 11:20  3%     ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-16  9:22 UTC (permalink / raw)
  To: Andrew Rybchenko, Ray Kinsella, Neil Horman, Thomas Monjalon,
	Andrew Rybchenko
  Cc: dev, Ivan Ilchenko

On 10/15/2020 2:30 PM, Andrew Rybchenko wrote:
> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> 
> Change rte_eth_dev_stop() return value from void to int
> and return negative errno values in case of error conditions.
> Also update the usage of the function in ethdev according to
> the new return type.
> 
> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>

<...>

> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f8686a50db..c8c30937fa 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -355,6 +355,9 @@ API Changes
>   * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
>     instead of ``rte_vhost_driver_start`` by crypto applications.
>   
> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
> +  ``int`` to provide a way to report various error conditions.
> +
>  

If there will be a new version: there is an ethdev block already in this
section, can you please move the paragraph up there?

>   ABI Changes
>   -----------
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index d9b82df073..b8cf04ef4d 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
>   	struct rte_eth_dev *dev;
>   	struct rte_eth_dev_info dev_info;
>   	int diag;
> -	int ret;
> +	int ret, ret_stop;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>   
> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
>   		RTE_ETHDEV_LOG(ERR,
>   			"Error during restoring configuration for device (port %u): %s\n",
>   			port_id, rte_strerror(-ret));
> -		rte_eth_dev_stop(port_id);
> +		ret_stop = rte_eth_dev_stop(port_id);
> +		if (ret_stop != 0) {
> +			RTE_ETHDEV_LOG(ERR,
> +				"Failed to stop device (port %u): %s\n",
> +				port_id, rte_strerror(-ret_stop));
> +		}
> +


Again, if there will be a new version:
this is the 'rte_eth_dev_start()' function and the error log is "Failed to stop
device .." :)
What do you think about adding a little more detail, like "failed to stop back
on error", etc.?


* Re: [dpdk-dev] performance degradation with fpic
  @ 2020-10-16  8:35  3%       ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-16  8:35 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Thomas Monjalon, Ali Alnubani, dev, Asaf Penso, david.marchand,
	arybchenko, ferruh.yigit, honnappa.nagarahalli, jerinj

On Thu, Oct 15, 2020 at 02:44:49PM -0700, Stephen Hemminger wrote:
> On Thu, 15 Oct 2020 19:14:48 +0200
> Thomas Monjalon <thomas@monjalon.net> wrote:
> 
> > 15/10/2020 19:08, Bruce Richardson:
> > > On Thu, Oct 15, 2020 at 04:00:44PM +0000, Ali Alnubani wrote:  
> > > >    We have been seeing in some cases that the DPDK forwarding performance
> > > >    is up to 9% lower when DPDK is built as static with meson compared to a
> > > >    build with makefiles.
> > > > 
> > > >    The same degradation can be reproduced with makefiles on older DPDK
> > > >    releases when building with EXTRA_CFLAGS set to “-fPIC”, it can also be
> > > >    resolved in meson when passing “pic: false” to meson’s static_library
> > > >    call (more tweaking needs to be done to prevent building shared
> > > >    libraries because this change breaks them).  
> > [...]
> > > >    Should we disable PIC in static builds?  
> > > 
> > > thanks for reporting, though it's strange that you see such a big impact.
> > > In my previous tests with i40e driver I never noticed a difference between
> > > make and meson builds, and I and some others here have been using meson
> > > builds for any performance work for over a year now. That being said let me
> > > reverify what I see on my end.
> > > 
> > > In terms of solutions, disabling the -fPIC flag globally implies that we
> > > can no longer build static and shared libs from the same sources, so we
> > > would need to revert to doing either a static or a shared library build
> > > but not both. If the issue is limited to only some drivers or some cases,
> > > we can perhaps add in a build option to have no-fpic-static builds, to be
> > > used in cases where it is problematic.
> > 
> > I assume only some Rx/Tx functions are impacted.
> > We probably need such disabling option per-file.
> > 
> > > However, at this point, I think we need a little more investigation. Is
> > > there any testing you can do to see if it's just in your driver, or in
> > > perhaps a mempool driver/lib that the issue appears, or if it's just a
> > > global slowdown? Do you see the impact with both clang and gcc?  I'll
> > > retest things a bit tomorrow on my end to see what I see.  
> > 
> > Yes we need to know which libs or files are impacted by -fPIC.
> 
> The issue is that all shared libraries need to be built with PIC.
> So it is a question of static vs shared library build.

Well, partially yes, but really using fPIC should only have a very small
difference in drivers. Therefore I'd like to know what's causing this
massive drop because, while disabling fPIC in the static builds (perhaps
per-component to avoid doubling the build time) will improve perf in the
static case, it will still leave a perf drop when a user switches to shared
libs. Since we want to move to a model where people are using shared
libraries and can update seamlessly due to constant ABI, I therefore think
we need to root cause this so we can fix the shared lib builds too - since
disabling fPIC is not an option there.

/Bruce

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 03/20] eal: rename lcore word choices
  @ 2020-10-15 22:57  1%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-15 22:57 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Anatoly Burakov

Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
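
For context, a minimal launch-loop sketch (not part of this patch; the
worker function is a placeholder) showing how application code reads with
the new names:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* work run on every lcore; the body here is a placeholder */
    static int
    worker_main(void *arg)
    {
            (void)arg;
            printf("lcore %u running\n", rte_lcore_id());
            return 0;
    }

    int
    main(int argc, char **argv)
    {
            unsigned int lcore_id;

            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            /* RTE_LCORE_FOREACH_WORKER() replaces RTE_LCORE_FOREACH_SLAVE() */
            RTE_LCORE_FOREACH_WORKER(lcore_id)
                    rte_eal_remote_launch(worker_main, NULL, lcore_id);

            /* the main (formerly "master") lcore runs the handler as well */
            worker_main(NULL);

            rte_eal_mp_wait_lcore();
            rte_eal_cleanup();
            return 0;
    }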

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/deprecation.rst       | 19 -------
 doc/guides/rel_notes/release_20_11.rst     | 11 ++++
 lib/librte_eal/common/eal_common_dynmem.c  | 10 ++--
 lib/librte_eal/common/eal_common_launch.c  | 36 ++++++------
 lib/librte_eal/common/eal_common_lcore.c   |  8 +--
 lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
 lib/librte_eal/common/eal_options.h        |  2 +
 lib/librte_eal/common/eal_private.h        |  6 +-
 lib/librte_eal/common/rte_random.c         |  2 +-
 lib/librte_eal/common/rte_service.c        |  2 +-
 lib/librte_eal/freebsd/eal.c               | 28 +++++-----
 lib/librte_eal/freebsd/eal_thread.c        | 32 +++++------
 lib/librte_eal/include/rte_eal.h           |  4 +-
 lib/librte_eal/include/rte_eal_trace.h     |  4 +-
 lib/librte_eal/include/rte_launch.h        | 60 ++++++++++----------
 lib/librte_eal/include/rte_lcore.h         | 35 ++++++++----
 lib/librte_eal/linux/eal.c                 | 28 +++++-----
 lib/librte_eal/linux/eal_memory.c          | 10 ++--
 lib/librte_eal/linux/eal_thread.c          | 32 +++++------
 lib/librte_eal/rte_eal_version.map         |  2 +-
 lib/librte_eal/windows/eal.c               | 16 +++---
 lib/librte_eal/windows/eal_thread.c        | 30 +++++-----
 22 files changed, 230 insertions(+), 211 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 604f198059c5..1eb8bd3643f1 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
-* eal: To be more inclusive in choice of naming, the DPDK project
-  will replace uses of master/slave in the API's and command line arguments.
-
-  References to master/slave in relation to lcore will be renamed
-  to initial/worker.  The function ``rte_get_master_lcore()``
-  will be renamed to ``rte_get_initial_lcore()``.
-  For the 20.11 release, both names will be present and the
-  old function will be marked with the deprecated tag.
-  The old function will be removed in a future version.
-
-  The iterator for worker lcores will also change:
-  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
-  ``RTE_LCORE_FOREACH_WORKER``.
-
-  The ``master-lcore`` argument to testpmd will be replaced
-  with ``initial-lcore``. The old ``master-lcore`` argument
-  will produce a runtime notification in 20.11 release, and
-  be removed completely in a future release.
-
 * eal: The terms blacklist and whitelist to describe devices used
   by DPDK will be replaced in the 20.11 relase.
   This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 708ebb01c85d..c1a907390a79 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -430,6 +430,17 @@ API Changes
 * sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
   from ``struct rte_sched_subport_params``.
 
+* eal: The function ``rte_get_master_lcore()`` has been replaced by
+  ``rte_get_main_lcore()``. The old function is deprecated.
+
+  The iterator for worker lcores will also change:
+  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
+  ``RTE_LCORE_FOREACH_WORKER``.
+
+  The ``master-lcore`` argument to testpmd will be replaced
+  with ``main-lcore``. The old ``master-lcore`` argument
+  will produce a runtime notification in 20.11 release, and
+  be removed completely in a future release.
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
 			total_size -= default_size;
 		}
 #else
-		/* in 32-bit mode, allocate all of the memory only on master
+		/* in 32-bit mode, allocate all of the memory only on main
 		 * lcore socket
 		 */
 		total_size = internal_conf->memory;
 		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
 				socket++) {
 			struct rte_config *cfg = rte_eal_get_configuration();
-			unsigned int master_lcore_socket;
+			unsigned int main_lcore_socket;
 
-			master_lcore_socket =
-				rte_lcore_to_socket_id(cfg->master_lcore);
+			main_lcore_socket =
+				rte_lcore_to_socket_id(cfg->main_lcore);
 
-			if (master_lcore_socket != socket)
+			if (main_lcore_socket != socket)
 				continue;
 
 			/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
  * Wait until a lcore finished its job.
  */
 int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
 {
-	if (lcore_config[slave_id].state == WAIT)
+	if (lcore_config[worker_id].state == WAIT)
 		return 0;
 
-	while (lcore_config[slave_id].state != WAIT &&
-	       lcore_config[slave_id].state != FINISHED)
+	while (lcore_config[worker_id].state != WAIT &&
+	       lcore_config[worker_id].state != FINISHED)
 		rte_pause();
 
 	rte_rmb();
 
 	/* we are in finished state, go to wait state */
-	lcore_config[slave_id].state = WAIT;
-	return lcore_config[slave_id].ret;
+	lcore_config[worker_id].state = WAIT;
+	return lcore_config[worker_id].ret;
 }
 
 /*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
  */
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
-			 enum rte_rmt_call_master_t call_master)
+			 enum rte_rmt_call_main_t call_main)
 {
 	int lcore_id;
-	int master = rte_get_master_lcore();
+	int main_lcore = rte_get_main_lcore();
 
 	/* check state of lcores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (lcore_config[lcore_id].state != WAIT)
 			return -EBUSY;
 	}
 
 	/* send messages to cores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_remote_launch(f, arg, lcore_id);
 	}
 
-	if (call_master == CALL_MASTER) {
-		lcore_config[master].ret = f(arg);
-		lcore_config[master].state = FINISHED;
+	if (call_main == CALL_MAIN) {
+		lcore_config[main_lcore].ret = f(arg);
+		lcore_config[main_lcore].state = FINISHED;
 	}
 
 	return 0;
 }
 
 /*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
  */
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
 {
 	unsigned lcore_id;
 
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_wait_lcore(lcore_id);
 	}
 }
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
 {
-	return rte_eal_get_configuration()->master_lcore;
+	return rte_eal_get_configuration()->main_lcore;
 }
 
 unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
 	if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
 
 	while (i < RTE_MAX_LCORE) {
 		if (!rte_lcore_is_enabled(i) ||
-		    (skip_master && (i == rte_get_master_lcore()))) {
+		    (skip_main && (i == rte_get_main_lcore()))) {
 			i++;
 			if (wrap)
 				i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
 	{OPT_TRACE_BUF_SIZE,    1, NULL, OPT_TRACE_BUF_SIZE_NUM   },
 	{OPT_TRACE_MODE,        1, NULL, OPT_TRACE_MODE_NUM       },
 	{OPT_MASTER_LCORE,      1, NULL, OPT_MASTER_LCORE_NUM     },
+	{OPT_MAIN_LCORE,        1, NULL, OPT_MAIN_LCORE_NUM       },
 	{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
 	{OPT_NO_HPET,           0, NULL, OPT_NO_HPET_NUM          },
 	{OPT_NO_HUGE,           0, NULL, OPT_NO_HUGE_NUM          },
@@ -144,7 +145,7 @@ struct device_option {
 static struct device_option_list devopt_list =
 TAILQ_HEAD_INITIALIZER(devopt_list);
 
-static int master_lcore_parsed;
+static int main_lcore_parsed;
 static int mem_parsed;
 static int core_parsed;
 
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
 		for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
 				j++, idx++) {
 			if ((1 << j) & val) {
-				/* handle master lcore already parsed */
+				/* handle main lcore already parsed */
 				uint32_t lcore = idx;
-				if (master_lcore_parsed &&
-						cfg->master_lcore == lcore) {
+				if (main_lcore_parsed &&
+						cfg->main_lcore == lcore) {
 					RTE_LOG(ERR, EAL,
-						"lcore %u is master lcore, cannot use as service core\n",
+						"lcore %u is main lcore, cannot use as service core\n",
 						idx);
 					return -1;
 				}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
 				min = idx;
 			for (idx = min; idx <= max; idx++) {
 				if (cfg->lcore_role[idx] != ROLE_SERVICE) {
-					/* handle master lcore already parsed */
+					/* handle main lcore already parsed */
 					uint32_t lcore = idx;
-					if (cfg->master_lcore == lcore &&
-							master_lcore_parsed) {
+					if (cfg->main_lcore == lcore &&
+							main_lcore_parsed) {
 						RTE_LOG(ERR, EAL,
-							"Error: lcore %u is master lcore, cannot use as service core\n",
+							"Error: lcore %u is main lcore, cannot use as service core\n",
 							idx);
 						return -1;
 					}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
 	return 0;
 }
 
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
 static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
 {
 	char *parsing_end;
 	struct rte_config *cfg = rte_eal_get_configuration();
 
 	errno = 0;
-	cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+	cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
 	if (errno || parsing_end[0] != 0)
 		return -1;
-	if (cfg->master_lcore >= RTE_MAX_LCORE)
+	if (cfg->main_lcore >= RTE_MAX_LCORE)
 		return -1;
-	master_lcore_parsed = 1;
+	main_lcore_parsed = 1;
 
-	/* ensure master core is not used as service core */
-	if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+	/* ensure main core is not used as service core */
+	if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
 		RTE_LOG(ERR, EAL,
-			"Error: Master lcore is used as a service core\n");
+			"Error: Main lcore is used as a service core\n");
 		return -1;
 	}
 
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
 		break;
 
 	case OPT_MASTER_LCORE_NUM:
-		if (eal_parse_master_lcore(optarg) < 0) {
+		fprintf(stderr,
+			"Option --" OPT_MASTER_LCORE
+			" is deprecated use " OPT_MAIN_LCORE "\n");
+		/* fallthrough */
+	case OPT_MAIN_LCORE_NUM:
+		if (eal_parse_main_lcore(optarg) < 0) {
 			RTE_LOG(ERR, EAL, "invalid parameter for --"
-					OPT_MASTER_LCORE "\n");
+					OPT_MAIN_LCORE "\n");
 			return -1;
 		}
 		break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
 
 	RTE_CPU_AND(cpuset, cpuset, &default_set);
 
-	/* if no remaining cpu, use master lcore cpu affinity */
+	/* if no remaining cpu, use main lcore cpu affinity */
 	if (!CPU_COUNT(cpuset)) {
-		memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+		memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
 			sizeof(*cpuset));
 	}
 }
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
 	if (internal_conf->process_type == RTE_PROC_AUTO)
 		internal_conf->process_type = eal_proc_type_detect();
 
-	/* default master lcore is the first one */
-	if (!master_lcore_parsed) {
-		cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
-		if (cfg->master_lcore >= RTE_MAX_LCORE)
+	/* default main lcore is the first one */
+	if (!main_lcore_parsed) {
+		cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+		if (cfg->main_lcore >= RTE_MAX_LCORE)
 			return -1;
-		lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+		lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
 	}
 
 	compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
-	if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
-		RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+	if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+		RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
 		return -1;
 	}
 
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
 	       "                      '( )' can be omitted for single element group,\n"
 	       "                      '@' can be omitted if cpus and lcores have the same value\n"
 	       "  -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
-	       "  --"OPT_MASTER_LCORE" ID   Core ID that is used as master\n"
+	       "  --"OPT_MAIN_LCORE" ID     Core ID that is used as main\n"
 	       "  --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
 	       "  -n CHANNELS         Number of memory channels\n"
 	       "  -m MB               Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
 	OPT_TRACE_BUF_SIZE_NUM,
 #define OPT_TRACE_MODE        "trace-mode"
 	OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE        "main-lcore"
+	OPT_MAIN_LCORE_NUM,
 #define OPT_MASTER_LCORE      "master-lcore"
 	OPT_MASTER_LCORE_NUM,
 #define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
  */
 struct lcore_config {
 	pthread_t thread_id;       /**< pthread identifier */
-	int pipe_master2slave[2];  /**< communication pipe with master */
-	int pipe_slave2master[2];  /**< communication pipe with master */
+	int pipe_main2worker[2];   /**< communication pipe with main */
+	int pipe_worker2main[2];   /**< communication pipe with main */
 
 	lcore_function_t * volatile f; /**< function to call */
 	void * volatile arg;       /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
  * The global RTE configuration structure.
  */
 struct rte_config {
-	uint32_t master_lcore;       /**< Id of the master lcore */
+	uint32_t main_lcore;         /**< Id of the main lcore */
 	uint32_t lcore_count;        /**< Number of available logical cores. */
 	uint32_t numa_node_count;    /**< Number of detected NUMA nodes. */
 	uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	lcore_id = rte_lcore_id();
 
 	if (unlikely(lcore_id == LCORE_ID_ANY))
-		lcore_id = rte_get_master_lcore();
+		lcore_id = rte_get_main_lcore();
 
 	return &rand_states[lcore_id];
 }
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
 	struct rte_config *cfg = rte_eal_get_configuration();
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
 		if (lcore_config[i].core_role == ROLE_SERVICE) {
-			if ((unsigned int)i == cfg->master_lcore)
+			if ((unsigned int)i == cfg->main_lcore)
 				continue;
 			rte_service_lcore_add(i);
 			count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
 
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
-		config->master_lcore, thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+		config->main_lcore, thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-				"lcore-slave-%d", i);
+				"lcore-worker-%d", i);
 		rte_thread_setname(lcore_config[i].thread_id, thread_name);
 
 		ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
 /**
  * Initialize the Environment Abstraction Layer (EAL).
  *
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
  * as possible in the application's main() function.
  *
  * The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
  *
  * When the multi-partition feature is supported, depending on the
  * configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_eal_trace_thread_remote_launch,
 	RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
-		unsigned int slave_id, int rc),
+		unsigned int worker_id, int rc),
 	rte_trace_point_emit_ptr(f);
 	rte_trace_point_emit_ptr(arg);
-	rte_trace_point_emit_u32(slave_id);
+	rte_trace_point_emit_u32(worker_id);
 	rte_trace_point_emit_int(rc);
 )
 RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
 /**
  * Launch a function on another lcore.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
  * is in the WAIT state (this is true after the first call to
  * rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
  *
  * When the remote lcore receives the message, it switches to
  * the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
  * the return value of f is stored in a local variable to be read using
  * rte_eal_wait_lcore().
  *
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
  * nothing about the completion of f.
  *
  * Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore on which the function should be executed.
  * @return
  *   - 0: Success. Execution of function f started on the remote lcore.
  *   - (-EBUSY): The remote lcore is not in a WAIT state.
  */
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
 
 /**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
  * launched on all logical cores.
  */
-enum rte_rmt_call_master_t {
-	SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
-	CALL_MASTER,     /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+	SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+	CALL_MAIN,     /**< lcore handler executed by main core. */
 };
 
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER	RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER	RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
 /**
  * Launch a function on all lcores.
  *
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
  * rte_eal_remote_launch() for each lcore.
  *
  * @param f
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param call_master
- *   If call_master set to SKIP_MASTER, the MASTER lcore does not call
- *   the function. If call_master is set to CALL_MASTER, the function
- *   is also called on master before returning. In any case, the master
+ * @param call_main
+ *   If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ *   the function. If call_main is set to CALL_MAIN, the function
+ *   is also called on main before returning. In any case, the main
  *   lcore returns as soon as it finished its job and knows nothing
  *   about the completion of f on the other lcores.
  * @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
  *     case, no message is sent to any of the lcores.
  */
 int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
-			     enum rte_rmt_call_master_t call_master);
+			     enum rte_rmt_call_main_t call_main);
 
 /**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
  *   The state of the lcore.
  */
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
 
 /**
  * Wait until an lcore finishes its job.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
  * switch to the WAIT state. If the lcore is in RUNNING state, wait until
  * the lcore finishes its job and moves to the FINISHED state.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
- *   - 0: If the lcore identified by the slave_id is in a WAIT state.
+ *   - 0: If the lcore identified by the worker_id is in a WAIT state.
  *   - The value that was returned by the previous remote launch
- *     function call if the lcore identified by the slave_id was in a
+ *     function call if the lcore identified by the worker_id was in a
  *     FINISHED or RUNNING state. In this case, it changes the state
  *     of the lcore to WAIT.
  */
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
 
 /**
  * Wait until all lcores finish their jobs.
  *
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
  * rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  *
  * After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
  */
 void rte_eal_mp_wait_lcore(void);
 
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
 }
 
 /**
- * Get the id of the master lcore
+ * Get the id of the main lcore
  *
  * @return
- *   the id of the master lcore
+ *   the id of the main lcore
  */
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function returning the id of the main lcore
+ *
+ * @return
+ *   the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+	return rte_get_main_lcore();
+}
 
 /**
  * Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
  *
  * @param i
  *   The current lcore (reference).
- * @param skip_master
- *   If true, do not return the ID of the master lcore.
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
  * @param wrap
  *   If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
  *   return RTE_MAX_LCORE.
  * @return
  *   The next lcore_id or RTE_MAX_LCORE if not found.
  */
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 
 /**
  * Macro to browse all running lcores.
  */
 #define RTE_LCORE_FOREACH(i)						\
 	for (i = rte_get_next_lcore(-1, 0, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 0, 0))
 
 /**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
  */
-#define RTE_LCORE_FOREACH_SLAVE(i)					\
+#define RTE_LCORE_FOREACH_WORKER(i)					\
 	for (i = rte_get_next_lcore(-1, 1, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 1, 0))
 
+#define RTE_LCORE_FOREACH_SLAVE(l)					\
+	RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
 /**
  * Callback prototype for initializing lcores.
  *
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
-		config->master_lcore, (uintptr_t)thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+		config->main_lcore, (uintptr_t)thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-			"lcore-slave-%d", i);
+			"lcore-worker-%d", i);
 		ret = rte_thread_setname(lcore_config[i].thread_id,
 						thread_name);
 		if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
 	/* the allocation logic is a little bit convoluted, but here's how it
 	 * works, in a nutshell:
 	 *  - if user hasn't specified on which sockets to allocate memory via
-	 *    --socket-mem, we allocate all of our memory on master core socket.
+	 *    --socket-mem, we allocate all of our memory on main core socket.
 	 *  - if user has specified sockets to allocate memory on, there may be
 	 *    some "unused" memory left (e.g. if user has specified --socket-mem
 	 *    such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
 	for (i = 0; i < rte_socket_count(); i++) {
 		int hp_sizes = (int) internal_conf->num_hugepage_sizes;
 		uint64_t max_socket_mem, cur_socket_mem;
-		unsigned int master_lcore_socket;
+		unsigned int main_lcore_socket;
 		struct rte_config *cfg = rte_eal_get_configuration();
 		bool skip;
 
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
 		skip = active_sockets != 0 &&
 				internal_conf->socket_mem[socket_id] == 0;
 		/* ...or if we didn't specifically request memory on *any*
-		 * socket, and this is not master lcore
+		 * socket, and this is not main lcore
 		 */
-		master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
-		skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+		main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+		skip |= active_sockets == 0 && socket_id != main_lcore_socket;
 
 		if (skip) {
 			RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index f56de02d8f6c..cd41167b2121 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -73,7 +73,7 @@ DPDK_21 {
 	rte_free;
 	rte_get_hpet_cycles;
 	rte_get_hpet_hz;
-	rte_get_master_lcore;
+	rte_get_main_lcore;
 	rte_get_next_lcore;
 	rte_get_tsc_hz;
 	rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index 141f22adb7dc..6334aca03df2 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -355,8 +355,8 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	bscan = rte_bus_scan();
 	if (bscan < 0) {
@@ -365,16 +365,16 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (_pipe(lcore_config[i].pipe_master2slave,
+		if (_pipe(lcore_config[i].pipe_main2worker,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (_pipe(lcore_config[i].pipe_slave2master,
+		if (_pipe(lcore_config[i].pipe_worker2main,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
 
@@ -399,10 +399,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 	return fctret;
 }
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
 #include "eal_windows.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		return -EBUSY;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = _write(m2s, &c, 1);
+		n = _write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = _read(s2m, &c, 1);
+		n = _read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
 	int n, ret;
 	unsigned int lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
 
 		/* wait command */
 		do {
-			n = _read(m2s, &c, 1);
+			n = _read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = _write(s2m, &c, 1);
+			n = _write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
-- 
2.27.0


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 13:07  0%                       ` Andrew Rybchenko
  2020-10-15 13:57  0%                         ` Slava Ovsiienko
@ 2020-10-15 20:22  0%                         ` Slava Ovsiienko
  1 sibling, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 20:22 UTC (permalink / raw)
  To: Andrew Rybchenko, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
	Jerin Jacob, Andrew Rybchenko
  Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
	David Marchand

Hi, 

Evening update:
- addressed code comments
- provided the union of segmentation description with dedicated feature structures, according to Jerin's proposal
- added the reporting of split limitations

With best regards, Slava

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 16:07
> To: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Jerin Jacob <jerinjacobk@gmail.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Cc: dpdk-dev <dev@dpdk.org>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
> 
> On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> > 15/10/2020 13:49, Slava Ovsiienko:
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> >>>
> >>> <...>
> >>>
> >>>>>>>> If we see some of the features of such kind or other PMDs
> >>>>>>>> adopts the split feature - we'll try to find the common root
> >>>>>>>> and consider the way how
> >>>>>> to report it.
> >>>>>>>
> >>>>>>> My only concern with that approach will be ABI break again if
> >>>>>>> something needs to exposed over rte_eth_dev_info().
> >>>>>
> >>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in
> >>>>> the rte_eth_dev_info to avoid ABI break?
> >>>>
> >>>> Works for me. If we add an additional reserved field.
> >>>>
> >>>> Due to RC1 time constraint, I am OK to leave it as a reserved filed
> >>>> and fill meat when it is required if other ethdev maintainers are OK.
> >>>> I will be required for feature complete.
> >>>>
> >>>
> >>> Sounds good to me.
> >
> > OK for me.
> 
> OK as well, but I dislike the idea of a pointer in dev_info.
> It sounds like it breaks existing practice.
> We should either reserve enough space or simply add a dedicated API call to
> report Rx seg capabilities.
> 
> >
> >> OK, let's introduce the pointer in the rte_eth_dev_info and define
> >> struct rte_eth_rxseg_limitations as experimental.
> >> Will it be allowed to update this one later (after 20.11)?
> >> Is an ABI break allowed in that case?
> >
> > If it is experimental, you can change it at anytime.
> >
> > Ideally, we could try to have a first version of the limitations
> > during 20.11-rc2.
> 
> Yes, please.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
  2020-10-15 18:07  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 18:27  4%     ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-15 18:27 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: Jerin Jacob, Mattias Rönnblom, Liang Ma, Peter Mccarthy,
	Nipun Gupta, Pavan Nikhilesh, dpdk-dev, Erik Gabriel Carrillo,
	Gage Eads, Van Haaren, Harry, Hemant Agrawal, Richardson, Bruce

On Thu, Oct 15, 2020 at 11:36 PM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The announcement made in 20.08 is no longer required.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 13 -------------

Acked-by: Jerin Jacob <jerinj@marvell.com>

Series squashed and applied to dpdk-next-eventdev/for-main. Thanks.


>  1 file changed, 13 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index efd7710..08f1c04 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -189,19 +189,6 @@ Deprecation Notices
>    ``rte_cryptodev_scheduler_worker_detach`` and
>    ``rte_cryptodev_scheduler_workers_get`` accordingly.
>
> -* eventdev: Following structures will be modified to support DLB PMD
> -  and future extensions:
> -
> -  - ``rte_event_dev_info``
> -  - ``rte_event_dev_config``
> -  - ``rte_event_port_conf``
> -
> -  Patches containing justification, documentation, and proposed modifications
> -  can be found at:
> -
> -  - https://patches.dpdk.org/patch/71457/
> -  - https://patches.dpdk.org/patch/71456/
> -
>  * sched: To allow more traffic classes, flexible mapping of pipe queues to
>    traffic classes, and subport level configuration of pipes and queues
>    changes will be made to macros, data structures and API functions defined
> --
> 2.6.4
>

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes
  2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-15 18:07  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
  2020-10-15 18:07  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 18:07 13%   ` Timothy McDaniel
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

The eventdev ABI changes announced in 20.08 have been implemented
in 20.11. This commit announces the implementation of those changes, and
lists the data structures that were modified.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7878e8e..0f8ee2a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -352,6 +352,14 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+* ``eventdev`` changes
+
+    * The following structures are modified to support the DLB/DLB2 PMDs
+      and future extensions:
+
+    * ``rte_event_dev_info``
+    * ``rte_event_dev_config``
+    * ``rte_event_port_conf``
 
 Known Issues
 ------------
-- 
2.6.4


^ permalink raw reply	[relevance 13%]

* [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
  2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-15 18:07  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-15 18:07  4%   ` Timothy McDaniel
  2020-10-15 18:27  4%     ` Jerin Jacob
  2020-10-15 18:07 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
  2 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

The announcement made in 20.08 is no longer required.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/rel_notes/deprecation.rst | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index efd7710..08f1c04 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -189,19 +189,6 @@ Deprecation Notices
   ``rte_cryptodev_scheduler_worker_detach`` and
   ``rte_cryptodev_scheduler_workers_get`` accordingly.
 
-* eventdev: Following structures will be modified to support DLB PMD
-  and future extensions:
-
-  - ``rte_event_dev_info``
-  - ``rte_event_dev_config``
-  - ``rte_event_port_conf``
-
-  Patches containing justification, documentation, and proposed modifications
-  can be found at:
-
-  - https://patches.dpdk.org/patch/71457/
-  - https://patches.dpdk.org/patch/71456/
-
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined
-- 
2.6.4


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-15 18:07  1%   ` Timothy McDaniel
  2020-10-15 18:07  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
  2020-10-15 18:07 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs.  Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.

The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
   of the event (QE) payload. Note that the DLB2 hardware is capable of
   carrying the flow_id.

Following is a detailed description of the changes that have been made.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

    struct rte_event_dev_info {
	uint32_t max_event_port_links;
	/**< Maximum number of queues that can be linked to a single event
	 * port by this device.
	 */

	uint8_t max_single_link_event_port_queue_pairs;
	/**< Maximum number of event ports and queues that are optimized for
	 * (and only capable of) single-link configurations supported by this
	 * device. These ports and queues are not accounted for in
	 * max_event_ports or max_event_queues.
	 */
    }

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

    /** Event device configuration structure */
    struct rte_event_dev_config {
	uint8_t nb_single_link_event_port_queues;
	/**< Number of event ports and queues that will be singly-linked to
	 * each other. These are a subset of the overall event ports and
	 * queues; this value cannot exceed *nb_event_ports* or
	 * *nb_event_queues*. If the device has ports and queues that are
	 * optimized for single-link usage, this field is a hint for how many
	 * to allocate; otherwise, regular event ports and queues can be used.
	 */
    }

3) Replace the dedicated disable_implicit_release field with a bit field
of explicit port capabilities. The implicit release disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future assignment.

	/* Event port configuration bitmap flags */
	#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
	/**< Configure the port not to release outstanding events in
	 * rte_event_dev_dequeue_burst(). If set, all events received through
	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
	 * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
	 */
	#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)

	/**< This event port links only to a single event queue.
	 *
	 *  @see rte_event_port_setup(), rte_event_port_link()
	 */

	#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
	/**
	 * The implicit release disable attribute of the port
	 */

	struct rte_event_port_conf {
		uint32_t event_port_cfg;
		/**< Port cfg flags(EVENT_PORT_CFG_) */
	}
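
The following sketch is an illustration only (not part of this patch;
dev_id and port_id are placeholder values) of how an application might
consume the new capability bit and port flag:

    struct rte_event_dev_info info;
    struct rte_event_port_conf pconf;

    rte_event_dev_info_get(dev_id, &info);
    rte_event_port_default_conf_get(dev_id, port_id, &pconf);

    /* Request explicit releases only if the device supports disabling
     * implicit release; otherwise leave event_port_cfg at its default.
     */
    if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
        pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

    if (rte_event_port_setup(dev_id, port_id, &pconf) < 0)
        printf("port setup failed\n");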

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 65 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 26 files changed, 213 insertions(+), 64 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+			true : false;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
 			.dequeue_timeout_ns = opt->deq_tmo_nsec,
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = info.max_num_events,
 			.nb_event_queue_flows = opt->nb_flows,
 			.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.sub_event_type == 0) { /* stage 0 from producer */
 			order_atq_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
 }
 
 static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].sub_event_type == 0) { /*stage 0 */
 				order_atq_process_stage_0(&ev[i]);
 			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_atq_worker_burst(arg);
-	else
-		return order_atq_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_atq_worker_burst(arg, true);
+		else
+			return order_atq_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_atq_worker(arg, true);
+		else
+			return order_atq_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
 		const uint32_t flow = (uintptr_t)m % nb_flows;
 		/* Maintain seq number per flow */
 		m->seqn = producer_flow_seq[flow]++;
+		m->udata64 = flow;
 
 		ev.flow_id = flow;
 		ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.queue_id == 0) { /* from ordered queue */
 			order_queue_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
 }
 
 static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].queue_id == 0) { /* from ordered queue */
 				order_queue_process_stage_0(&ev[i]);
 			} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_queue_worker_burst(arg);
-	else
-		return order_queue_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_queue_worker_burst(arg, true);
+		else
+			return order_queue_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_queue_worker(arg, true);
+		else
+			return order_queue_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
 	if (!(info.event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
-		pconf.disable_implicit_release = 1;
+		pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
 		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-		pconf.disable_implicit_release = 0;
+		pconf.event_port_cfg = 0;
 	}
 
 	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
 		RTE_EVENT_DEV_CAP_BURST_MODE |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index f7383ca..95f03c8 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
 	port_conf->enqueue_depth =
 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
 	};
 }
 
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 33cb502..6f242aa 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = edev->max_num_events;
 	port_conf->dequeue_depth = 1;
 	port_conf->enqueue_depth = 1;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 256b6a5..b31c26e 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 		.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
-		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
 	};
 
 	*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
 	dev_info->max_num_events = (1ULL << 20);
 	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
 					RTE_EVENT_DEV_CAP_BURST_MODE |
-					RTE_EVENT_DEV_CAP_EVENT_QOS;
+					RTE_EVENT_DEV_CAP_EVENT_QOS |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 32 * 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index e310c8c..0d8013a 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -179,7 +179,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 	}
 
 	p->inflight_max = conf->new_event_threshold;
-	p->implicit_release = !conf->disable_implicit_release;
+	p->implicit_release = !(conf->event_port_cfg &
+				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 
 	/* check if ring exists, same as rx_worker above */
 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -501,7 +502,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
@@ -608,7 +609,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 				RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
 				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
 	};
 
 	*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
 			.new_event_threshold = 1024,
 			.dequeue_depth = 32,
 			.enqueue_depth = 64,
-			.disable_implicit_release = 0,
 	};
 	if (num_ports > MAX_PORTS)
 		return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
 				.new_event_threshold = 128,
 				.dequeue_depth = 32,
 				.enqueue_depth = 64,
-				.disable_implicit_release = 0,
 		};
 		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 			printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
 		.new_event_threshold = 128,
 		.dequeue_depth = 32,
 		.enqueue_depth = 64,
-		.disable_implicit_release = 0,
 	};
 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 		printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
 	 * only be initialized once - and this needs to be set for multiple runs
 	 */
 	conf.new_event_threshold = 512;
-	conf.disable_implicit_release = disable_implicit_release;
+	conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
 		printf("Error setting up RX port\n");
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 1,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
-		.nb_atomic_order_sequences = 1024,
+			.nb_atomic_order_sequences = 1024,
 	};
 	struct rte_event_queue_conf tx_q_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	disable_implicit_release = (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 
-	wkr_p_conf.disable_implicit_release = disable_implicit_release;
+	wkr_p_conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (dev_info.max_num_events < config.nb_events_limit)
 		config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index 86287b4..cc27bbc 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
 		return ret;
 	}
 
-	pc->disable_implicit_release = 0;
+	pc->event_port_cfg = 0;
 	ret = rte_event_port_setup(dev_id, port_id, pc);
 	if (ret) {
 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 557198f..322453c 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -438,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
 					dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_queues > info.max_event_queues) {
-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
-		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+	if (dev_conf->nb_event_queues > info.max_event_queues +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 info.max_event_queues,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues -
+			dev_conf->nb_single_link_event_port_queues >
+			info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_queues);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_single_link_event_port_queues >
+			dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_queues);
 		return -EINVAL;
 	}
 
@@ -449,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_ports > info.max_event_ports) {
-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
-		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+	if (dev_conf->nb_event_ports > info.max_event_ports +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 info.max_event_ports,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports -
+			dev_conf->nb_single_link_event_port_queues
+			> info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_ports);
+		return -EINVAL;
+	}
+
+	if (dev_conf->nb_single_link_event_port_queues >
+	    dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR(
+				 "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_ports);
 		return -EINVAL;
 	}
 
@@ -738,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		return -EINVAL;
 	}
 
-	if (port_conf && port_conf->disable_implicit_release &&
+	if (port_conf &&
+	    (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
 	    !(dev->data->event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		RTE_EDEV_LOG_ERR(
@@ -831,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
 		*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
 		break;
+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+	{
+		uint32_t config;
+
+		config = dev->data->ports_cfg[port_id].event_port_cfg;
+		*attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+		break;
+	}
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
  * single queue to each port or map a single queue to many port.
  */
 
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
 	 * event port by this device.
 	 * A device that does not support bulk enqueue will set this as 1.
 	 */
+	uint8_t max_event_port_links;
+	/**< Maximum number of queues that can be linked to a single event
+	 * port by this device.
+	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
 	 * can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
 	 */
 	uint32_t event_dev_cap;
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	uint8_t max_single_link_event_port_queue_pairs;
+	/**< Maximum number of event ports and queues that are optimized for
+	 * (and only capable of) single-link configurations supported by this
+	 * device. These ports and queues are not accounted for in
+	 * max_event_ports or max_event_queues.
+	 */
 };
 
 /**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+	uint8_t nb_single_link_event_port_queues;
+	/**< Number of event ports and queues that will be singly-linked to
+	 * each other. These are a subset of the overall event ports and
+	 * queues; this value cannot exceed *nb_event_ports* or
+	 * *nb_event_queues*. If the device has ports and queues that are
+	 * optimized for single-link usage, this field is a hint for how many
+	 * to allocate; otherwise, regular event ports and queues can be used.
+	 */
 };
 
 /**
@@ -519,7 +543,6 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf);
 
-
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 /* Event port specific APIs */
 
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
 /** Event port configuration structure */
 struct rte_event_port_conf {
 	int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
 	 * which previously supplied to rte_event_dev_configure().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
 	 */
-	uint8_t disable_implicit_release;
-	/**< Configure the port not to release outstanding events in
-	 * rte_event_dev_dequeue_burst(). If true, all events received through
-	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is not
-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
-	 */
+	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
 };
 
 /**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
  * The new event threshold of the port
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
 /**
  * Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
 	return -ENXIO;
 }
 
-
 /**
  * @internal
  * Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+	rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_ptr(conf_cb);
 	rte_trace_point_emit_int(rc);
 )
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
 	# added in 20.05
 	__rte_eventdev_trace_configure;
 	__rte_eventdev_trace_queue_setup;
-	__rte_eventdev_trace_port_setup;
 	__rte_eventdev_trace_port_link;
 	__rte_eventdev_trace_port_unlink;
 	__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
 	__rte_eventdev_trace_crypto_adapter_start;
 	__rte_eventdev_trace_crypto_adapter_stop;
+
+	# changed in 20.11
+	__rte_eventdev_trace_port_setup;
 };
-- 
2.6.4


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2
    2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
@ 2020-10-15 18:07  9% ` Timothy McDaniel
  2020-10-15 18:07  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
                     ` (2 more replies)
  2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.

The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.

Due to the above, we would like to propose the following enhancements.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future
assignment.
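
As a rough illustration (not part of this series; dev_id, ev, and the use
of mbuf udata64 are assumptions mirroring the test-app changes), an
application could check the new capability and restore the flow_id itself
when the device cannot carry it:

    struct rte_event_dev_info info;

    rte_event_dev_info_get(dev_id, &info);

    /* If the device does not preserve flow_id, recover it from metadata
     * the producer stashed in the mbuf (here: udata64).
     */
    if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
        ev.flow_id = ev.mbuf->udata64;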

Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang
Added a blurb to the release notes announcing the changes contained in this patch
Removed ABI deprecation announcement
Resolved patch apply issues when applying to eventdev-next
Combined ABI patch and app/examples patch to remove dependencies

Testing showed no performance impact from the flow_id template code
added to the test app.

[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html

Timothy McDaniel (3):
  eventdev: eventdev: express DLB/DLB2 PMD constraints
  doc: remove eventdev ABI change announcement
  doc: announce new eventdev ABI changes

 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 doc/guides/rel_notes/deprecation.rst               | 13 -----
 doc/guides/rel_notes/release_20_11.rst             |  8 +++
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 65 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 28 files changed, 221 insertions(+), 77 deletions(-)

-- 
2.6.4


^ permalink raw reply	[relevance 9%]

* [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes
  2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
  2020-10-15 17:31  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
  2020-10-15 17:31  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 17:31 13%   ` Timothy McDaniel
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

The eventdev ABI changes announced in 20.08 have been implemented
in 20.11. This commit announces the implementation of those changes, and
lists the data structures that were modified.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7878e8e..0f8ee2a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -352,6 +352,14 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+* ``eventdev`` changes
+
+    * Following structures are modified to support DLB/DLB2 PMDs
+      and future extensions:
+
+    * ``rte_event_dev_info``
+    * ``rte_event_dev_config``
+    * ``rte_event_port_conf``
 
 Known Issues
 ------------
-- 
2.6.4


^ permalink raw reply	[relevance 13%]

* [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
@ 2020-10-15 17:31  1%   ` Timothy McDaniel
  2020-10-15 17:31  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
  2020-10-15 17:31 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs.  Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.

The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
   of the event (QE) payload. Note that the DLB2 hardware is capable of
   carrying the flow_id.

Following is a detailed description of the changes that have been made.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertize its capabilities so that applications can take
the appropriate actions based on those capabilities.

    struct rte_event_dev_info {
	uint32_t max_event_port_links;
	/**< Maximum number of queues that can be linked to a single event
	 * port by this device.
	 */

	uint8_t max_single_link_event_port_queue_pairs;
	/**< Maximum number of event ports and queues that are optimized for
	 * (and only capable of) single-link configurations supported by this
	 * device. These ports and queues are not accounted for in
	 * max_event_ports or max_event_queues.
	 */
    }

2) Add a new field to the rte_event_dev_config struct. This field allows the
application to specify how many of its ports are limited to a single link,
or will be used in single link mode.

    /** Event device configuration structure */
    struct rte_event_dev_config {
	uint8_t nb_single_link_event_port_queues;
	/**< Number of event ports and queues that will be singly-linked to
	 * each other. These are a subset of the overall event ports and
	 * queues; this value cannot exceed *nb_event_ports* or
	 * *nb_event_queues*. If the device has ports and queues that are
	 * optimized for single-link usage, this field is a hint for how many
	 * to allocate; otherwise, regular event ports and queues can be used.
	 */
    }

3) Replace the dedicated disable_implicit_release field with a bit field
of explicit port capabilities. The implicit release disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future assignment.

	/* Event port configuration bitmap flags */
	#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
	/**< Configure the port not to release outstanding events in
	 * rte_event_dev_dequeue_burst(). If set, all events received through
	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
	 * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
	 */
	#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)

	/**< This event port links only to a single event queue.
	 *
	 *  @see rte_event_port_setup(), rte_event_port_link()
	 */

	#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
	/**
	 * The implicit release disable attribute of the port
	 */

	struct rte_event_port_conf {
		uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
	}
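
For illustration only (dev_id and port_id are placeholders), the new port
attribute can be read back after setup through the existing attribute API:

    uint32_t impl_rel_disabled = 0;

    /* Query whether implicit release was disabled for this port */
    if (rte_event_port_attr_get(dev_id, port_id,
            RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE,
            &impl_rel_disabled) == 0 && impl_rel_disabled)
        printf("port %u requires explicit RTE_EVENT_OP_RELEASE\n", port_id);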

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 65 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 26 files changed, 213 insertions(+), 64 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+			true : false;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
 			.dequeue_timeout_ns = opt->deq_tmo_nsec,
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = info.max_num_events,
 			.nb_event_queue_flows = opt->nb_flows,
 			.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.sub_event_type == 0) { /* stage 0 from producer */
 			order_atq_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
 }
 
 static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].sub_event_type == 0) { /*stage 0 */
 				order_atq_process_stage_0(&ev[i]);
 			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_atq_worker_burst(arg);
-	else
-		return order_atq_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_atq_worker_burst(arg, true);
+		else
+			return order_atq_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_atq_worker(arg, true);
+		else
+			return order_atq_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
 		const uint32_t flow = (uintptr_t)m % nb_flows;
 		/* Maintain seq number per flow */
 		m->seqn = producer_flow_seq[flow]++;
+		m->udata64 = flow;
 
 		ev.flow_id = flow;
 		ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.queue_id == 0) { /* from ordered queue */
 			order_queue_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
 }
 
 static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].queue_id == 0) { /* from ordered queue */
 				order_queue_process_stage_0(&ev[i]);
 			} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_queue_worker_burst(arg);
-	else
-		return order_queue_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_queue_worker_burst(arg, true);
+		else
+			return order_queue_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_queue_worker(arg, true);
+		else
+			return order_queue_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
 	if (!(info.event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
-		pconf.disable_implicit_release = 1;
+		pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
 		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-		pconf.disable_implicit_release = 0;
+		pconf.event_port_cfg = 0;
 	}
 
 	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
 		RTE_EVENT_DEV_CAP_BURST_MODE |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index f7383ca..95f03c8 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
 	port_conf->enqueue_depth =
 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
 	};
 }
 
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 33cb502..6f242aa 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = edev->max_num_events;
 	port_conf->dequeue_depth = 1;
 	port_conf->enqueue_depth = 1;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 256b6a5..b31c26e 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 		.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
-		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
 	};
 
 	*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
 	dev_info->max_num_events = (1ULL << 20);
 	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
 					RTE_EVENT_DEV_CAP_BURST_MODE |
-					RTE_EVENT_DEV_CAP_EVENT_QOS;
+					RTE_EVENT_DEV_CAP_EVENT_QOS |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 32 * 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index e310c8c..0d8013a 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -179,7 +179,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 	}
 
 	p->inflight_max = conf->new_event_threshold;
-	p->implicit_release = !conf->disable_implicit_release;
+	p->implicit_release = !(conf->event_port_cfg &
+				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 
 	/* check if ring exists, same as rx_worker above */
 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -501,7 +502,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
@@ -608,7 +609,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 				RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
 				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
 	};
 
 	*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
 			.new_event_threshold = 1024,
 			.dequeue_depth = 32,
 			.enqueue_depth = 64,
-			.disable_implicit_release = 0,
 	};
 	if (num_ports > MAX_PORTS)
 		return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
 				.new_event_threshold = 128,
 				.dequeue_depth = 32,
 				.enqueue_depth = 64,
-				.disable_implicit_release = 0,
 		};
 		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 			printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
 		.new_event_threshold = 128,
 		.dequeue_depth = 32,
 		.enqueue_depth = 64,
-		.disable_implicit_release = 0,
 	};
 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 		printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
 	 * only be initialized once - and this needs to be set for multiple runs
 	 */
 	conf.new_event_threshold = 512;
-	conf.disable_implicit_release = disable_implicit_release;
+	conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
 		printf("Error setting up RX port\n");
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 1,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
-		.nb_atomic_order_sequences = 1024,
+			.nb_atomic_order_sequences = 1024,
 	};
 	struct rte_event_queue_conf tx_q_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	disable_implicit_release = (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 
-	wkr_p_conf.disable_implicit_release = disable_implicit_release;
+	wkr_p_conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (dev_info.max_num_events < config.nb_events_limit)
 		config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index 86287b4..cc27bbc 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
 		return ret;
 	}
 
-	pc->disable_implicit_release = 0;
+	pc->event_port_cfg = 0;
 	ret = rte_event_port_setup(dev_id, port_id, pc);
 	if (ret) {
 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 557198f..322453c 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -438,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
 					dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_queues > info.max_event_queues) {
-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
-		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+	if (dev_conf->nb_event_queues > info.max_event_queues +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 info.max_event_queues,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues -
+			dev_conf->nb_single_link_event_port_queues >
+			info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_queues);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_single_link_event_port_queues >
+			dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_queues);
 		return -EINVAL;
 	}
 
@@ -449,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_ports > info.max_event_ports) {
-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
-		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+	if (dev_conf->nb_event_ports > info.max_event_ports +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 info.max_event_ports,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports -
+			dev_conf->nb_single_link_event_port_queues
+			> info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_ports);
+		return -EINVAL;
+	}
+
+	if (dev_conf->nb_single_link_event_port_queues >
+	    dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR(
+				 "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_ports);
 		return -EINVAL;
 	}
 
@@ -738,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		return -EINVAL;
 	}
 
-	if (port_conf && port_conf->disable_implicit_release &&
+	if (port_conf &&
+	    (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
 	    !(dev->data->event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		RTE_EDEV_LOG_ERR(
@@ -831,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
 		*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
 		break;
+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+	{
+		uint32_t config;
+
+		config = dev->data->ports_cfg[port_id].event_port_cfg;
+		*attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+		break;
+	}
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
  * single queue to each port or map a single queue to many port.
  */
 
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
 	 * event port by this device.
 	 * A device that does not support bulk enqueue will set this as 1.
 	 */
+	uint8_t max_event_port_links;
+	/**< Maximum number of queues that can be linked to a single event
+	 * port by this device.
+	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
 	 * can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
 	 */
 	uint32_t event_dev_cap;
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	uint8_t max_single_link_event_port_queue_pairs;
+	/**< Maximum number of event ports and queues that are optimized for
+	 * (and only capable of) single-link configurations supported by this
+	 * device. These ports and queues are not accounted for in
+	 * max_event_ports or max_event_queues.
+	 */
 };
 
 /**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+	uint8_t nb_single_link_event_port_queues;
+	/**< Number of event ports and queues that will be singly-linked to
+	 * each other. These are a subset of the overall event ports and
+	 * queues; this value cannot exceed *nb_event_ports* or
+	 * *nb_event_queues*. If the device has ports and queues that are
+	 * optimized for single-link usage, this field is a hint for how many
+	 * to allocate; otherwise, regular event ports and queues can be used.
+	 */
 };
 
 /**
@@ -519,7 +543,6 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf);
 
-
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 /* Event port specific APIs */
 
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
 /** Event port configuration structure */
 struct rte_event_port_conf {
 	int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
 	 * which previously supplied to rte_event_dev_configure().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
 	 */
-	uint8_t disable_implicit_release;
-	/**< Configure the port not to release outstanding events in
-	 * rte_event_dev_dequeue_burst(). If true, all events received through
-	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is not
-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
-	 */
+	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
 };
 
 /**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
  * The new event threshold of the port
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
 /**
  * Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
 	return -ENXIO;
 }
 
-
 /**
  * @internal
  * Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+	rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_ptr(conf_cb);
 	rte_trace_point_emit_int(rc);
 )
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
 	# added in 20.05
 	__rte_eventdev_trace_configure;
 	__rte_eventdev_trace_queue_setup;
-	__rte_eventdev_trace_port_setup;
 	__rte_eventdev_trace_port_link;
 	__rte_eventdev_trace_port_unlink;
 	__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
 	__rte_eventdev_trace_crypto_adapter_start;
 	__rte_eventdev_trace_crypto_adapter_stop;
+
+	# changed in 20.11
+	__rte_eventdev_trace_port_setup;
 };
-- 
2.6.4


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
  2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
  2020-10-15 17:31  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-15 17:31  4%   ` Timothy McDaniel
  2020-10-15 17:31 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

The announcement made in 20.08 is no longer required.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 doc/guides/rel_notes/deprecation.rst | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index efd7710..08f1c04 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -189,19 +189,6 @@ Deprecation Notices
   ``rte_cryptodev_scheduler_worker_detach`` and
   ``rte_cryptodev_scheduler_workers_get`` accordingly.
 
-* eventdev: Following structures will be modified to support DLB PMD
-  and future extensions:
-
-  - ``rte_event_dev_info``
-  - ``rte_event_dev_config``
-  - ``rte_event_port_conf``
-
-  Patches containing justification, documentation, and proposed modifications
-  can be found at:
-
-  - https://patches.dpdk.org/patch/71457/
-  - https://patches.dpdk.org/patch/71456/
-
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined
-- 
2.6.4


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2
    2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-15 17:31  9% ` Timothy McDaniel
  2020-10-15 17:31  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
                     ` (2 more replies)
  2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
  Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
	nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
	harry.van.haaren, hemant.agrawal, bruce.richardson

This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.

The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.

Due to the above, we would like to propose the following enhancements.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future
assignment.
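
To show the three changes from the application side, a minimal configuration
sketch follows. It is not taken from the patch set itself; the queue/port
counts, depths and thresholds are illustrative assumptions and most error
handling is omitted.

#include <stdio.h>
#include <rte_eventdev.h>

static int
configure_with_new_abi(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_port_conf pconf = {
		.new_event_threshold = 1024,
		.dequeue_depth = 16,
		.enqueue_depth = 16,
		/* replaces the old disable_implicit_release byte */
		.event_port_cfg = 0,
	};

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return -1;

	/* 1) New capability/limit fields let the app adapt. */
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
		printf("flow_id is not preserved by this device\n");

	struct rte_event_dev_config cfg = {
		.nb_event_queues = 2,
		.nb_event_ports = 2,
		/* 2) Hint how many ports/queues will be used in single-link
		 * mode; single-link-only resources are reported separately in
		 * info.max_single_link_event_port_queue_pairs.
		 */
		.nb_single_link_event_port_queues = 1,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 16,
		.nb_event_port_enqueue_depth = 16,
	};
	if (info.max_num_events > 0 && info.max_num_events < cfg.nb_events_limit)
		cfg.nb_events_limit = info.max_num_events;
	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		return -1;

	/* 3) Port capabilities are now bit flags in event_port_cfg; a port
	 * meant for the single-link queue would also set
	 * RTE_EVENT_PORT_CFG_SINGLE_LINK.
	 */
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

	return rte_event_port_setup(dev_id, 0, &pconf);
}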

Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang
Added blurb to release notes announcing the changes contained in this patch
Removed ABI deprecation announcement
Resolved patch apply issues when applying to eventdev-next
Combined ABI patch and app/examples patch to remove bi-directional dependency

Testing showed no performance impact due to the flow_id template code
added to test app.

[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html

Timothy McDaniel (3):
  eventdev: eventdev: express DLB/DLB2 PMD constraints
  doc: remove eventdev ABI change announcement
  doc: announce new eventdev ABI changes

 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 doc/guides/rel_notes/deprecation.rst               | 13 -----
 doc/guides/rel_notes/release_20_11.rst             |  8 +++
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 65 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 28 files changed, 221 insertions(+), 77 deletions(-)

-- 
2.6.4


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
  2020-10-15 15:32  0%             ` Luca Boccassi
@ 2020-10-15 15:34  0%               ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-15 15:34 UTC (permalink / raw)
  To: Luca Boccassi; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas

On Thu, Oct 15, 2020 at 04:32:35PM +0100, Luca Boccassi wrote:
> On Thu, 2020-10-15 at 15:03 +0100, Bruce Richardson wrote:
> > On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> > > On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > > > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > > > improvements in standardizing the naming of the various components in DPDK,
> > > > > > and their associated feature-enabled macros.
> > > > > > 
> > > > > > Following this patch, each library will have the name in format,
> > > > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > > > build will have the form 'RTE_LIB_<NAME>'.
> > > > > > 
> > > > > > Similarly, for libraries, the equivalent name formats and macros are:
> > > > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > > > 'crypto' etc.
> > > > > > 
> > > > > > To avoid too many changes at once for end applications, the old macro names
> > > > > > will still be provided in the build in this release, but will be removed
> > > > > > subsequently.
> > > > > > 
> > > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > > 
> > > > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > > > ---
> > > > > >  app/test-bbdev/meson.build            |  4 ++--
> > > > > >  app/test-crypto-perf/meson.build      |  2 +-
> > > > > >  app/test-pmd/meson.build              | 12 ++++++------
> > > > > >  app/test/meson.build                  |  8 ++++----
> > > > > >  doc/guides/rel_notes/deprecation.rst  |  8 ++++++++
> > > > > >  drivers/baseband/meson.build          |  1 -
> > > > > >  drivers/bus/meson.build               |  1 -
> > > > > >  drivers/common/meson.build            |  1 -
> > > > > >  drivers/common/mlx5/meson.build       |  1 -
> > > > > >  drivers/common/qat/meson.build        |  1 -
> > > > > >  drivers/compress/meson.build          |  1 -
> > > > > >  drivers/compress/octeontx/meson.build |  2 +-
> > > > > >  drivers/crypto/meson.build            |  1 -
> > > > > >  drivers/crypto/null/meson.build       |  2 +-
> > > > > >  drivers/crypto/octeontx/meson.build   |  2 +-
> > > > > >  drivers/crypto/octeontx2/meson.build  |  2 +-
> > > > > >  drivers/crypto/scheduler/meson.build  |  2 +-
> > > > > >  drivers/crypto/virtio/meson.build     |  2 +-
> > > > > >  drivers/event/dpaa/meson.build        |  2 +-
> > > > > >  drivers/event/dpaa2/meson.build       |  2 +-
> > > > > >  drivers/event/meson.build             |  1 -
> > > > > >  drivers/event/octeontx/meson.build    |  2 +-
> > > > > >  drivers/event/octeontx2/meson.build   |  2 +-
> > > > > >  drivers/mempool/meson.build           |  1 -
> > > > > >  drivers/meson.build                   |  9 ++++-----
> > > > > >  drivers/net/meson.build               |  1 -
> > > > > >  drivers/net/mlx4/meson.build          |  2 +-
> > > > > >  drivers/raw/ifpga/meson.build         |  2 +-
> > > > > >  drivers/raw/meson.build               |  1 -
> > > > > >  drivers/regex/meson.build             |  1 -
> > > > > >  drivers/vdpa/meson.build              |  1 -
> > > > > >  examples/bond/meson.build             |  2 +-
> > > > > >  examples/ethtool/meson.build          |  2 +-
> > > > > >  examples/ioat/meson.build             |  2 +-
> > > > > >  examples/l2fwd-crypto/meson.build     |  2 +-
> > > > > >  examples/ntb/meson.build              |  2 +-
> > > > > >  examples/vm_power_manager/meson.build |  6 +++---
> > > > > >  lib/librte_ethdev/meson.build         |  1 -
> > > > > >  lib/librte_graph/meson.build          |  2 --
> > > > > >  lib/meson.build                       |  3 ++-
> > > > > >  40 files changed, 47 insertions(+), 55 deletions(-)
> > > > > 
> > > > > Does this change the share object file names too, or only the macros?
> > > > > 
> > > > 
> > > > It does indeed change the object name files, which is a little bit
> > > > concerning. However, the consensus based on the RFC seemed to be that the
> > > > benefit is likely worth the change. If we want, we can look to use symlinks
> > > > to the old names on install, but I think that just delays the pain since I
> > > > would expect few to actually change their build to the new names until the
> > > > old ones and the symlinks completely go away.
> > > > 
> > > > /Bruce
> > > 
> > > It is a backward incompatible change, so we need to provide symlinks,
> > > right? On upgrade, programs linked to librte_old.so will fail to start.
> > > Or was this targeted at 20.11 thus piggy-backing on the ABI change
> > > which forces a re-link?
> > > 
> > More of the latter, and the fact that changing the build system involved a
> > few library renames anyway for those using make. Since the ABI is changing
> > this release, and all the libs have a new major version number there is no
> > requirement for libs linked against an older version to work, and since
> > pkg-config should now be used for linking the actual names should not be
> > a concern.
> > 
> > That's the thinking anyway. :-)
> > 
> > /Bruce
> 
> Ok that makes sense, I wasn't sure if this series was targeted for
> 20.11 or for later. In that case,
> 
> Acked-by: Luca Boccassi <bluca@debian.org>
>

Yes, if it doesn't make 20.11 we'll have to re-evaluate and be stricter
with the compatibility constraints. It might not be worth doing post-20.11. 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
  2020-10-15 14:03  3%           ` Bruce Richardson
@ 2020-10-15 15:32  0%             ` Luca Boccassi
  2020-10-15 15:34  0%               ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Luca Boccassi @ 2020-10-15 15:32 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas

On Thu, 2020-10-15 at 15:03 +0100, Bruce Richardson wrote:
> On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> > On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > > improvements in standardizing the naming of the various components in DPDK,
> > > > > and their associated feature-enabled macros.
> > > > > 
> > > > > Following this patch, each library will have the name in format,
> > > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > > build will have the form 'RTE_LIB_<NAME>'.
> > > > > 
> > > > > Similarly, for libraries, the equivalent name formats and macros are:
> > > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > > 'crypto' etc.
> > > > > 
> > > > > To avoid too many changes at once for end applications, the old macro names
> > > > > will still be provided in the build in this release, but will be removed
> > > > > subsequently.
> > > > > 
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > 
> > > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > > ---
> > > > >  app/test-bbdev/meson.build            |  4 ++--
> > > > >  app/test-crypto-perf/meson.build      |  2 +-
> > > > >  app/test-pmd/meson.build              | 12 ++++++------
> > > > >  app/test/meson.build                  |  8 ++++----
> > > > >  doc/guides/rel_notes/deprecation.rst  |  8 ++++++++
> > > > >  drivers/baseband/meson.build          |  1 -
> > > > >  drivers/bus/meson.build               |  1 -
> > > > >  drivers/common/meson.build            |  1 -
> > > > >  drivers/common/mlx5/meson.build       |  1 -
> > > > >  drivers/common/qat/meson.build        |  1 -
> > > > >  drivers/compress/meson.build          |  1 -
> > > > >  drivers/compress/octeontx/meson.build |  2 +-
> > > > >  drivers/crypto/meson.build            |  1 -
> > > > >  drivers/crypto/null/meson.build       |  2 +-
> > > > >  drivers/crypto/octeontx/meson.build   |  2 +-
> > > > >  drivers/crypto/octeontx2/meson.build  |  2 +-
> > > > >  drivers/crypto/scheduler/meson.build  |  2 +-
> > > > >  drivers/crypto/virtio/meson.build     |  2 +-
> > > > >  drivers/event/dpaa/meson.build        |  2 +-
> > > > >  drivers/event/dpaa2/meson.build       |  2 +-
> > > > >  drivers/event/meson.build             |  1 -
> > > > >  drivers/event/octeontx/meson.build    |  2 +-
> > > > >  drivers/event/octeontx2/meson.build   |  2 +-
> > > > >  drivers/mempool/meson.build           |  1 -
> > > > >  drivers/meson.build                   |  9 ++++-----
> > > > >  drivers/net/meson.build               |  1 -
> > > > >  drivers/net/mlx4/meson.build          |  2 +-
> > > > >  drivers/raw/ifpga/meson.build         |  2 +-
> > > > >  drivers/raw/meson.build               |  1 -
> > > > >  drivers/regex/meson.build             |  1 -
> > > > >  drivers/vdpa/meson.build              |  1 -
> > > > >  examples/bond/meson.build             |  2 +-
> > > > >  examples/ethtool/meson.build          |  2 +-
> > > > >  examples/ioat/meson.build             |  2 +-
> > > > >  examples/l2fwd-crypto/meson.build     |  2 +-
> > > > >  examples/ntb/meson.build              |  2 +-
> > > > >  examples/vm_power_manager/meson.build |  6 +++---
> > > > >  lib/librte_ethdev/meson.build         |  1 -
> > > > >  lib/librte_graph/meson.build          |  2 --
> > > > >  lib/meson.build                       |  3 ++-
> > > > >  40 files changed, 47 insertions(+), 55 deletions(-)
> > > > 
> > > > Does this change the share object file names too, or only the macros?
> > > > 
> > > 
> > > It does indeed change the object name files, which is a little bit
> > > concerning. However, the consensus based on the RFC seemed to be that the
> > > benefit is likely worth the change. If we want, we can look to use symlinks
> > > to the old names on install, but I think that just delays the pain since I
> > > would expect few to actually change their build to the new names until the
> > > old ones and the symlinks completely go away.
> > > 
> > > /Bruce
> > 
> > It is a backward incompatible change, so we need to provide symlinks,
> > right? On upgrade, programs linked to librte_old.so will fail to start.
> > Or was this targeted at 20.11 thus piggy-backing on the ABI change
> > which forces a re-link?
> > 
> More of the latter, and the fact that changing the build system involved a
> few library renames anyway for those using make. Since the ABI is changing
> this release, and all the libs have a new major version number there is no
> requirement for libs linked against an older version to work, and since
> pkg-config should now be used for linking the actual names should not be
> a concern.
> 
> That's the thinking anyway. :-)
> 
> /Bruce

Ok that makes sense, I wasn't sure if this series was targeted for
20.11 or for later. In that case,

Acked-by: Luca Boccassi <bluca@debian.org>

-- 
Kind regards,
Luca Boccassi

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 11:09  0%             ` Andrew Rybchenko
@ 2020-10-15 14:39  0%               ` Slava Ovsiienko
  0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 14:39 UTC (permalink / raw)
  To: Andrew Rybchenko, Jerin Jacob
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

Hi, Andrew

> >> At least there are few simple limitations which are easy to
> >> express:
> >>  1. Maximum number of segments
> > We have scatter capability and we do not report the maximal number of
> > segments, it is on PMD own. We could add the field to the
> > rte_eth_dev_info, but not sure whether we have something special to report
> there even for mlx5 case.
> 
> There is always a limitation in programming and HW. Nothing is unlimited.
> Limits could be high, but still exist.
> Number of descriptors? Width of field in HW interface?
> Maximum length of the config message to HW?
> All above could limit it directly or indirectly.

None of the above applies to the mlx5 buffer split feature - it just adjusts the Rx buffer pointers
and segment sizes, with nothing beyond the generic limitations - the queue descriptor numbers
and the mbuf buffer size. I suppose most HW from other vendors can support the
buffer split feature with similar generic limitations.

> 
> >>  2. Possibility to use the last segment many times if required
> >>     (I was suggesting to use scatter for it, but you rejected
> >>      the idea - may be time to reconsider :) )
> >
> > Mmm, sorry I do not follow, it might be I did not understand/missed your
> idea.
> > Some of the last segment attributes are used multiple times to scatter
> > the rest of the data in fashion very close to the existing scattering
> > approach - at least, pool and buffer size from this pool are used. The
> > beginning of the packet scattered according to the new descriptions,
> > the rest of the packet - according to the existing regular scattering
> > with pool settings from the last segment description.
> 
> I believe that the possibility to split into a fixed segments
> (BUFFER_SPLIT) and possibility to use a mempool (just mp or last segment)
> many times if a packet does not fit (SCATTER) it is *different* features.

Sorry, what do you mean by "use the mempool many times"? Allocate multiple
mbufs from the same mempool and build a chain of them?

We have the SCATTER offload and many PMDs advertise it.
Scattering is actually a split as well - the split just happens at well-defined points
into mbufs from the same pool. BUFFER_SPLIT merely extends the SCATTER
capabilities by allowing arbitrary split-point settings and multiple
pools.

> I can easily imagine HW which could do BUFFER_SPLIT to fixed segments, but
> cannot use the last segment many times (i.e. no classical SCATTER).

Sorry, what do you mean by "BUFFER_SPLIT to fixed segments"?
The new BUFFER_SPLIT offload is intended to push data into flexible segments,
potentially allocated from different pools. The HW can be constrained
in the number of pools (say it supports a pool alloc/free hardware accelerator
for a single pool only); in that case it will not be able to support BUFFER_SPLIT
in a multiple-pool configuration, but using a single pool does not raise the problem.

It seems I missed something - could you please provide an example of
how you would like to see the last segment used many times with BUFFER_SPLIT?
How should the packet be split, into mbufs with which (last-segment inherited) attributes?

> 
> >
> >  3. Maximum offset
> >>     Frankly speaking I'm not sure why it cannot be handled on
> >>     PMD level (i.e. provide descriptors with offset taken into
> >>     account or guarantee that HW mempool objects initialized
> >>     correctly with required headroom). May be in some corner
> >>     cases when the same HW mempool is shared by various
> >>     segments with different offset requirements.
> >
> > HW offsets are beyond the feature scope, the offsets in the segment
> > description is supposed to be added to the native pool offsets (if any).
> 
> Are you saying that offsets are not passed to HW and just handled by PMD to
> provide correct IOVA addresses to put data to? If so, it is an implementation
> detail which is specific to mlx5. If so, no specific limitations except data room,
> size and offset consistency.
> But it could be passed to a HW and it could be, for example, just 8 bits for the
> value.

Yes, it could. But other vendors would need to be involved, and it is not known yet
who is going to support BUFFER_SPLIT or in which way. We should not invent
theoretical limitations and merge dead code. And please note -
Tx segmentation has lived successfully for 10 years without any limitations being reported;
no one cares about them and there has been no request to report them. The same is expected for Rx.

> 
> >
> >>  4. Offset alignment
> >>  5. Maximum/minimum length of a segment  6. Length alignment
> > In which form? Mask of lsbs ? 0 means no limitations ?
> 
> log2, i.e. 0 => 1 (no limitations) 1 => 2 (even only),
> 6 => 64 (64-byte cache line aligned) etc.
> 

Yes, possible option.
> >
> >>
> >> I realize that 3, 4 and 5 could be per segment number.
> >> If it is really that complex, report common denominator which is
> >> guaranteed to work. If we have no checks on ethdev layer, application
> >> can ignore it if it knows better
> >
> > Currently it is not clear at all what kind of limitations should be
> > reported, we could include all of mentioned/proposed ones, and no one
> > will report there -
> > mlx5 has no any reasonable limitations to report for now.
> >
> > Should we reserve some pointer field in the rte_eth_dev_info to report
> > the limitations? (Limitation description should contain variable size
> > array, depending on the number of segments, so pointer seems to be
> appropriate).
> > It would allow us to avoid ABI break, and present the limitation structure
> once it is defined.
> 
> I will let other ethdev maintainers to make a decision here.
> My vote would be to report limitations mentioned above.
> It looks like Jerin is also interested in limitations reporting. Not sure if my form
> looks OK or no.

For now I tend to think we could reserve a pointer for BUFFER_SPLIT limitations and that's it.
Reporting some trivial generic limitations from mlx5 means introducing dead code, in my opinion.
If we see an actual request from applications to check and handle limitations, we can define the
structure then (in practice applications are very limited in this matter - they expect the split
point to be set at a strictly defined place in the packet format).
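
For reference, if such reporting were ever added, the reserved pointer could
refer to something roughly like the sketch below. This is purely hypothetical -
no such structure exists in the tree at this point - and the field set simply
mirrors the items listed above (segment count, offsets, alignments in log2
form, length bounds):

#include <stdint.h>

/* Hypothetical sketch only - not part of any posted patch. */
struct rte_eth_rxseg_limitations {
	uint16_t max_nseg;          /* max number of split segments, 0 = no limit */
	uint16_t max_offset;        /* max data offset per segment */
	uint8_t  offset_align_log2; /* 0 = byte aligned, 6 = 64B aligned */
	uint8_t  length_align_log2; /* alignment of the segment length */
	uint16_t min_seg_len;       /* minimum segment data length */
	uint16_t max_seg_len;       /* maximum segment data length, 0 = no limit */
};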




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
  2020-10-15 14:26  7%   ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
@ 2020-10-15 14:38  4%     ` McDaniel, Timothy
  0 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-10-15 14:38 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Carrillo, Erik G, Eads, Gage, Van Haaren, Harry,
	Hemant Agrawal



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, October 15, 2020 9:26 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Eads,
> Gage <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
> 
> On Thu, Oct 15, 2020 at 3:04 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > This series implements the eventdev ABI changes required by
> > the DLB and DLB2 PMDs. This ABI change was announced in the
> > 20.08 release notes [1]. This patch was initially part of
> > the V1 DLB PMD patchset.
> 
> Hi @McDaniel, Timothy ,
> 
> Following things missing in this patch set before it needs to merge:
> - Update doc/guides/rel_notes/release_20_11.rst for "API Changes"
> and/or "ABI Changes" section
> - Update doc/guides/rel_notes/deprecation.rst to remove the this patch
> specific depreciation note
> - Merge patch 1 and 2 to a single patch it has a compilation error if
> we build patch1 alone
> - Update the git commit to give more data on the combined patch.
> - rebase the patch to http://browse.dpdk.org/next/dpdk-next-eventdev/,
> it still git-am apply issues.
> 
> After fixing the above, I will merge this RC1. Please send ASAP.
> 

I will get on this straight away. Thanks.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
  2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-14 21:36  2%   ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
  2020-10-14 21:36  6%   ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
@ 2020-10-15 14:26  7%   ` Jerin Jacob
  2020-10-15 14:38  4%     ` McDaniel, Timothy
  2 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 14:26 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
	Hemant Agrawal

On Thu, Oct 15, 2020 at 3:04 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This series implements the eventdev ABI changes required by
> the DLB and DLB2 PMDs. This ABI change was announced in the
> 20.08 release notes [1]. This patch was initially part of
> the V1 DLB PMD patchset.

Hi @McDaniel, Timothy ,

The following things are missing from this patch set before it can be merged:
- Update doc/guides/rel_notes/release_20_11.rst with an "API Changes"
and/or "ABI Changes" section entry
- Update doc/guides/rel_notes/deprecation.rst to remove the deprecation
note specific to this patch
- Merge patches 1 and 2 into a single patch; patch 1 does not compile
if built on its own
- Update the git commit message to give more detail on the combined patch.
- Rebase the patch onto http://browse.dpdk.org/next/dpdk-next-eventdev/;
it still has git-am apply issues.

After fixing the above, I will merge this for RC1. Please send ASAP.



>
> The DLB hardware does not conform exactly to the eventdev interface.
> 1) It has a limit on the number of queues that may be linked to a port.
> 2) Some ports are further restricted to a maximum of 1 linked queue.
> 3) It does not (currently) have the ability to carry the flow_id as part
> of the event (QE) payload.
>
> Due to the above, we would like to propose the following enhancements.
>
> 1) Add new fields to the rte_event_dev_info struct. These fields allow
> the device to advertise its capabilities so that applications can take
> the appropriate actions based on those capabilities.
>
> 2) Add a new field to the rte_event_dev_config struct. This field allows
> the application to specify how many of its ports are limited to a single
> link, or will be used in single link mode.
>
> 3) Replace the dedicated implicit_release_disabled field with a bit field
> of explicit port capabilities. The implicit_release_disable functionality
> is assigned to one bit, and a port-is-single-link-only attribute is
> assigned to another, with the remaining bits available for future
> assignment.
>
> Note that it was requested that we split this app/test
> changes out from the eventdev ABI patch. As a result,
> neither of these patches will build without the other
> also being applied.
>
> Major changes since V1:
> Reworded commit message, as requested
> Fixed errors reported by clang
>
> Testing showed no performance impact due to the flow_id template code
> added to test app.
>
> [1] http://mails.dpdk.org/archives/dev/2020-August/177261.html
>
>
> Timothy McDaniel (2):
>   eventdev: eventdev: express DLB/DLB2 PMD constraints
>   eventdev: update app and examples for new eventdev ABI
>
>
>
> Timothy McDaniel (2):
>   eventdev: eventdev: express DLB/DLB2 PMD constraints
>   eventdev: update app and examples for new eventdev ABI
>
>  app/test-eventdev/evt_common.h                     | 11 ++++
>  app/test-eventdev/test_order_atq.c                 | 28 ++++++---
>  app/test-eventdev/test_order_common.c              |  1 +
>  app/test-eventdev/test_order_queue.c               | 29 +++++++---
>  app/test/test_eventdev.c                           |  4 +-
>  drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
>  drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
>  drivers/event/dsw/dsw_evdev.c                      |  3 +-
>  drivers/event/octeontx/ssovf_evdev.c               |  5 +-
>  drivers/event/octeontx2/otx2_evdev.c               |  3 +-
>  drivers/event/opdl/opdl_evdev.c                    |  3 +-
>  drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
>  drivers/event/sw/sw_evdev.c                        |  8 ++-
>  drivers/event/sw/sw_evdev_selftest.c               |  6 +-
>  .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
>  examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
>  examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
>  examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
>  lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
>  lib/librte_eventdev/rte_eventdev.c                 | 66 +++++++++++++++++++---
>  lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
>  lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
>  lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
>  lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
>  26 files changed, 214 insertions(+), 64 deletions(-)
>
> --
> 2.6.4
>

^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
  2020-10-15 13:05  3%         ` Luca Boccassi
@ 2020-10-15 14:03  3%           ` Bruce Richardson
  2020-10-15 15:32  0%             ` Luca Boccassi
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2020-10-15 14:03 UTC (permalink / raw)
  To: Luca Boccassi; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas

On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > improvements in standardizing the naming of the various components in DPDK,
> > > > and their associated feature-enabled macros.
> > > > 
> > > > Following this patch, each library will have the name in format,
> > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > build will have the form 'RTE_LIB_<NAME>'.
> > > > 
> > > > Similarly, for libraries, the equivalent name formats and macros are:
> > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > 'crypto' etc.
> > > > 
> > > > To avoid too many changes at once for end applications, the old macro names
> > > > will still be provided in the build in this release, but will be removed
> > > > subsequently.
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > 
> > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > ---
> > > >  app/test-bbdev/meson.build            |  4 ++--
> > > >  app/test-crypto-perf/meson.build      |  2 +-
> > > >  app/test-pmd/meson.build              | 12 ++++++------
> > > >  app/test/meson.build                  |  8 ++++----
> > > >  doc/guides/rel_notes/deprecation.rst  |  8 ++++++++
> > > >  drivers/baseband/meson.build          |  1 -
> > > >  drivers/bus/meson.build               |  1 -
> > > >  drivers/common/meson.build            |  1 -
> > > >  drivers/common/mlx5/meson.build       |  1 -
> > > >  drivers/common/qat/meson.build        |  1 -
> > > >  drivers/compress/meson.build          |  1 -
> > > >  drivers/compress/octeontx/meson.build |  2 +-
> > > >  drivers/crypto/meson.build            |  1 -
> > > >  drivers/crypto/null/meson.build       |  2 +-
> > > >  drivers/crypto/octeontx/meson.build   |  2 +-
> > > >  drivers/crypto/octeontx2/meson.build  |  2 +-
> > > >  drivers/crypto/scheduler/meson.build  |  2 +-
> > > >  drivers/crypto/virtio/meson.build     |  2 +-
> > > >  drivers/event/dpaa/meson.build        |  2 +-
> > > >  drivers/event/dpaa2/meson.build       |  2 +-
> > > >  drivers/event/meson.build             |  1 -
> > > >  drivers/event/octeontx/meson.build    |  2 +-
> > > >  drivers/event/octeontx2/meson.build   |  2 +-
> > > >  drivers/mempool/meson.build           |  1 -
> > > >  drivers/meson.build                   |  9 ++++-----
> > > >  drivers/net/meson.build               |  1 -
> > > >  drivers/net/mlx4/meson.build          |  2 +-
> > > >  drivers/raw/ifpga/meson.build         |  2 +-
> > > >  drivers/raw/meson.build               |  1 -
> > > >  drivers/regex/meson.build             |  1 -
> > > >  drivers/vdpa/meson.build              |  1 -
> > > >  examples/bond/meson.build             |  2 +-
> > > >  examples/ethtool/meson.build          |  2 +-
> > > >  examples/ioat/meson.build             |  2 +-
> > > >  examples/l2fwd-crypto/meson.build     |  2 +-
> > > >  examples/ntb/meson.build              |  2 +-
> > > >  examples/vm_power_manager/meson.build |  6 +++---
> > > >  lib/librte_ethdev/meson.build         |  1 -
> > > >  lib/librte_graph/meson.build          |  2 --
> > > >  lib/meson.build                       |  3 ++-
> > > >  40 files changed, 47 insertions(+), 55 deletions(-)
> > > 
> > > Does this change the share object file names too, or only the macros?
> > > 
> > 
> > It does indeed change the object name files, which is a little bit
> > concerning. However, the consensus based on the RFC seemed to be that the
> > benefit is likely worth the change. If we want, we can look to use symlinks
> > to the old names on install, but I think that just delays the pain since I
> > would expect few to actually change their build to the new names until the
> > old ones and the symlinks completely go away.
> > 
> > /Bruce
> 
> It is a backward incompatible change, so we need to provide symlinks,
> right? On upgrade, programs linked to librte_old.so will fail to start.
> Or was this targeted at 20.11 thus piggy-backing on the ABI change
> which forces a re-link?
> 
More of the latter, plus the fact that changing the build system already
involved a few library renames for those using make. Since the ABI is changing
this release, and all the libs have a new major version number, there is no
requirement for libs linked against an older version to work, and since
pkg-config should now be used for linking, the actual names should not be
a concern.

That's the thinking anyway. :-)

/Bruce
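
A minimal sketch (not from the patch itself) of how an application might key
driver- or library-specific code off the new-style defines discussed in this
thread; RTE_NET_MLX4 and RTE_LIB_GRAPH follow the new naming pattern for
components present in the patch, and the function below is purely illustrative.

#include <stdio.h>
#include <rte_config.h>

static void
report_enabled_components(void)
{
#ifdef RTE_NET_MLX4	/* new-style define for drivers/net/mlx4 */
	printf("mlx4 net PMD is part of this build\n");
#endif
#ifdef RTE_LIB_GRAPH	/* new-style define for lib/librte_graph */
	printf("graph library is part of this build\n");
#endif
}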

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 13:07  0%                       ` Andrew Rybchenko
@ 2020-10-15 13:57  0%                         ` Slava Ovsiienko
  2020-10-15 20:22  0%                         ` Slava Ovsiienko
  1 sibling, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 13:57 UTC (permalink / raw)
  To: Andrew Rybchenko, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
	Jerin Jacob, Andrew Rybchenko
  Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
	David Marchand

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 16:07
> To: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Jerin Jacob <jerinjacobk@gmail.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Cc: dpdk-dev <dev@dpdk.org>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
> 
> On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> > 15/10/2020 13:49, Slava Ovsiienko:
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> >>>
> >>> <...>
> >>>
> >>>>>>>> If we see some of the features of such kind or other PMDs
> >>>>>>>> adopts the split feature - we'll try to find the common root
> >>>>>>>> and consider the way how
> >>>>>> to report it.
> >>>>>>>
> >>>>>>> My only concern with that approach will be ABI break again if
> >>>>>>> something needs to exposed over rte_eth_dev_info().
> >>>>>
> >>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in
> >>>>> the rte_eth_dev_info to avoid ABI break?
> >>>>
> >>>> Works for me. If we add an additional reserved field.
> >>>>
> >>>> Due to the RC1 time constraint, I am OK to leave it as a reserved field
> >>>> and fill in the details when it is required, if other ethdev maintainers are OK.
> >>>> It will be required for feature completeness.
> >>>>
> >>>
> >>> Sounds good to me.
> >
> > OK for me.
> 
> OK as well, but I dislike the idea with pointer in dev_info.
> It sounds like it breaks existing practice.

Moreover, if we are going to have multiple features using Rx segmentation,
we should provide multiple structures with limitations - at least one per feature.

> We should either reserve enough space or simply add dedicated API call to
> report Rx seg capabilities.
> 
It seems we are trying to embrace everything in a very generic way 😊
Just curious - how did we manage to survive without limitations in the Tx direction?
No one tells us how many segments a PMD supports on Tx, or what the limitations
for offsets and alignments are; it seems there are no limits on Tx segment size at all.
How could that happen? Do Tx limitations not exist, or did no one care about them?

As for Rx limitations - there are no reasonable ones for now. We would invent
a way to report the limitations (and it seems unbalanced - we should then provide
the same for Tx), the next step would be to provide at least one PMD using it,
and in this way make the mlx5 PMD report silly values - "I have no reasonable
limitations beyond a meaningful buffer size under pool_buf_size/UINT16_MAX".

IMO, if some HW does not support an arbitrary split (presumably not the common case -
most HW is very flexible about specifying Rx buffers), the BUFFER_SPLIT feature
should not be advertised at all, because it would not be very useful - the application
works over a specific protocol and knows where it wants to set the split points
(defined by the packet format). Hence, the application is not so interested in offsets,
alignments, etc. - it just checks whether the PMD provides the requested split points or not.

That's why simple documentation was initially intended - not many limitations
are expected, as the Tx direction shows.

Yes, generally speaking, there is no doubt it would be nice to report the limitations, but:
- not many are expected (documentation can cover the few exceptions)
- no nice way to report them has been found - a pointer? an API?
- they are complicated to present for various features (variable-size array, multiple features)
- it is not known which limitations are actually needed, just some theoretical ones

So, we see a large white area: should we invent something not well-defined to cover it,
or should we wait for an actual request for limitations that cannot be handled by documentation
and internal PMD checking/validation?

With best regards, Slava


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
  @ 2020-10-15 13:30  4%   ` Andrew Rybchenko
  2020-10-16  9:22  0%     ` Ferruh Yigit
  2020-10-16 11:20  3%     ` Kinsella, Ray
  0 siblings, 2 replies; 200+ results
From: Andrew Rybchenko @ 2020-10-15 13:30 UTC (permalink / raw)
  To: Ray Kinsella, Neil Horman, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko
  Cc: dev, Ivan Ilchenko

From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>

Change rte_eth_dev_stop() return value from void to int
and return negative errno values in case of error conditions.
Also update the usage of the function in ethdev according to
the new return type.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
 doc/guides/rel_notes/deprecation.rst   |  1 -
 doc/guides/rel_notes/release_20_11.rst |  3 +++
 lib/librte_ethdev/rte_ethdev.c         | 27 +++++++++++++++++++-------
 lib/librte_ethdev/rte_ethdev.h         |  5 ++++-
 4 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d1f5ed39db..2e04e24374 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -127,7 +127,6 @@ Deprecation Notices
   negative errno values to indicate various error conditions (e.g.
   invalid port ID, unsupported operation, failed operation):
 
-  - ``rte_eth_dev_stop``
   - ``rte_eth_dev_close``
 
 * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f8686a50db..c8c30937fa 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -355,6 +355,9 @@ API Changes
 * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
   instead of ``rte_vhost_driver_start`` by crypto applications.
 
+* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
+  ``int`` to provide a way to report various error conditions.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index d9b82df073..b8cf04ef4d 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	int diag;
-	int ret;
+	int ret, ret_stop;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
@@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
 		RTE_ETHDEV_LOG(ERR,
 			"Error during restoring configuration for device (port %u): %s\n",
 			port_id, rte_strerror(-ret));
-		rte_eth_dev_stop(port_id);
+		ret_stop = rte_eth_dev_stop(port_id);
+		if (ret_stop != 0) {
+			RTE_ETHDEV_LOG(ERR,
+				"Failed to stop device (port %u): %s\n",
+				port_id, rte_strerror(-ret_stop));
+		}
+
 		return ret;
 	}
 
@@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
 	return 0;
 }
 
-void
+int
 rte_eth_dev_stop(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	RTE_ETH_VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
 
 	if (dev->data->dev_started == 0) {
 		RTE_ETHDEV_LOG(INFO,
 			"Device with port_id=%"PRIu16" already stopped\n",
 			port_id);
-		return;
+		return 0;
 	}
 
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id);
+
+	return 0;
 }
 
 int
@@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
 
-	rte_eth_dev_stop(port_id);
+	ret = rte_eth_dev_stop(port_id);
+	if (ret != 0) {
+		RTE_ETHDEV_LOG(ERR,
+			"Failed to stop device (port %u) before reset: %s - ignore\n",
+			port_id, rte_strerror(-ret));
+	}
 	ret = dev->dev_ops->dev_reset(dev);
 
 	return eth_err(port_id, ret);
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index a61ca115a0..b85861cf2b 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -2277,8 +2277,11 @@ int rte_eth_dev_start(uint16_t port_id);
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
+ * @return
+ *   - 0: Success, Ethernet device stopped.
+ *   - <0: Error code of the driver device stop function.
  */
-void rte_eth_dev_stop(uint16_t port_id);
+int rte_eth_dev_stop(uint16_t port_id);
 
 /**
  * Link up an Ethernet device.
-- 
2.17.1
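
A minimal sketch (not part of the patch) of how an application could consume
the new int return value of rte_eth_dev_stop(); the logging policy shown here
is an assumption, not something mandated by the change.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

static int
app_stop_port(uint16_t port_id)
{
	/* rte_eth_dev_stop() now returns 0 or a negative errno value. */
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		printf("Failed to stop port %u: %s\n",
		       port_id, rte_strerror(-ret));
	return ret;
}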


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v6 2/5] ethdev: add new attributes to hairpin config
  @ 2020-10-15 13:08  4%   ` Bing Zhao
  0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-15 13:08 UTC (permalink / raw)
  To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
	bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

To support hairpin mode between two ports and keep backward compatibility
for applications, two new attribute members are added to the hairpin queue
configuration structure.

`tx_explicit` indicates whether the application itself will insert the Tx part
of the flow rules. If not set, the PMD will insert the rules implicitly.
`manual_bind` indicates whether the application will bind the hairpin Tx queue
and its peer Rx queue manually; if not set, they are bound automatically during
the device start stage.

Different Tx and Rx queue pairs could have different values, but it
is highly recommended that all paired queues between one egress port and
its peer ingress port use the same values, in order to keep the behavior
consistent. The actual support for these attributes is checked and decided
by the PMD drivers.

In single port hairpin mode, if both attributes are left at zero, the
behavior remains the same as before: no bind API needs to be called and
no Tx flow rules need to be inserted manually by the application.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v6: removed an unnecessary comment and used "Rx" & "Tx" consistently
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
 doc/guides/prog_guide/rte_flow.rst     |  3 +++
 doc/guides/rel_notes/release_20_11.rst |  7 +++++++
 lib/librte_ethdev/rte_ethdev.c         |  8 ++++----
 lib/librte_ethdev/rte_ethdev.h         | 27 ++++++++++++++++++++++++++-
 4 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 55497c9..3df005a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2618,6 +2618,9 @@ set, unpredictable value will be seen depending on driver implementation. For
 loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
 the other path depending on HW capability.
 
+In hairpin case with Tx explicit flow mode, metadata could (not mandatory) be
+used to connect the Rx and Tx flows if it can be propagated from Rx to Tx path.
+
 .. _table_rte_flow_action_set_meta:
 
 .. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 02bf7ca..2f23e6f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -92,6 +92,7 @@ New Features
 * **Updated the ethdev library to support hairpin between two ports.**
 
   New APIs are introduced to support binding / unbinding 2 ports hairpin.
+  Hairpin Tx part flow rules can be inserted explicitly.
 
 * **Updated Broadcom bnxt driver.**
 
@@ -396,6 +397,12 @@ ABI Changes
     Applications should use the new values for identification of existing
     extensions in the packet header.
 
+  * ``struct rte_eth_hairpin_conf`` has two new members:
+
+    * ``uint32_t tx_explicit:1;``
+    * ``uint32_t manual_bind:1;``
+
+
 Known Issues
 ------------
 
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 57cf4a7..bcbee30 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -2003,13 +2003,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_rx_2_tx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_rx_2_tx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Rx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
@@ -2174,13 +2174,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_tx_2_rx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_tx_2_rx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Tx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 10eb626..a8e5cdc 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
  * A structure used to configure hairpin binding.
  */
 struct rte_eth_hairpin_conf {
-	uint16_t peer_count; /**< The number of peers. */
+	uint32_t peer_count:16; /**< The number of peers. */
+
+	/**
+	 * Explicit Tx flow rule mode.
+	 * One hairpin pair of queues should have the same attribute.
+	 *
+	 * - When set, the user should be responsible for inserting the hairpin
+	 *   Tx part flows and removing them.
+	 * - When clear, the PMD will try to handle the Tx part of the flows,
+	 *   e.g., by splitting one flow into two parts.
+	 */
+	uint32_t tx_explicit:1;
+
+	/**
+	 * Manually bind hairpin queues.
+	 * One hairpin pair of queues should have the same attribute.
+	 *
+	 * - When set, to enable hairpin, the user should call the hairpin bind
+	 *   function after all the queues are set up properly and the ports are
+	 *   started. Also, the hairpin unbind function should be called
+	 *   accordingly before stopping a port that with hairpin configured.
+	 * - When clear, the PMD will try to enable the hairpin with the queues
+	 *   configured automatically during port start.
+	 */
+	uint32_t manual_bind:1;
+	uint32_t reserved:14; /**< Reserved bits. */
 	struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
 };
 
-- 
1.8.3.1
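
A minimal sketch (not taken from the patch) of how an application might request
the new explicit Tx rule and manual bind behaviour when setting up a hairpin Rx
queue; all function parameters are assumed to be provided by the application.

#include <rte_ethdev.h>

static int
setup_hairpin_rxq(uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rxd,
		  uint16_t peer_port_id, uint16_t peer_queue_id)
{
	struct rte_eth_hairpin_conf hairpin_conf = {
		.peer_count = 1,
		.tx_explicit = 1, /* application inserts the Tx part flow rules */
		.manual_bind = 1, /* application calls the hairpin bind API itself */
	};

	hairpin_conf.peers[0].port = peer_port_id;
	hairpin_conf.peers[0].queue = peer_queue_id;

	return rte_eth_rx_hairpin_queue_setup(port_id, rx_queue_id,
					      nb_rxd, &hairpin_conf);
}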


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 12:49  0%                     ` Thomas Monjalon
@ 2020-10-15 13:07  0%                       ` Andrew Rybchenko
  2020-10-15 13:57  0%                         ` Slava Ovsiienko
  2020-10-15 20:22  0%                         ` Slava Ovsiienko
  0 siblings, 2 replies; 200+ results
From: Andrew Rybchenko @ 2020-10-15 13:07 UTC (permalink / raw)
  To: Thomas Monjalon, Ferruh Yigit, Jerin Jacob, Slava Ovsiienko,
	Andrew Rybchenko
  Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
	David Marchand

On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> 15/10/2020 13:49, Slava Ovsiienko:
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
>>>
>>> <...>
>>>
>>>>>>>> If we see some of the features of such kind or other PMDs adopts
>>>>>>>> the split feature - we'll try to find the common root and consider
>>>>>>>> the way how
>>>>>> to report it.
>>>>>>>
>>>>>>> My only concern with that approach will be ABI break again if
>>>>>>> something needs to exposed over rte_eth_dev_info().
>>>>>
>>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
>>>>> rte_eth_dev_info to avoid ABI break?
>>>>
>>>> Works for me. If we add an additional reserved field.
>>>>
>>>> Due to RC1 time constraint, I am OK to leave it as a reserved filed
>>>> and fill meat when it is required if other ethdev maintainers are OK.
>>>> I will be required for feature complete.
>>>>
>>>
>>> Sounds good to me.
> 
> OK for me.

OK as well, but I dislike the idea of a pointer in dev_info.
It sounds like it breaks existing practice.
We should either reserve enough space or simply add
a dedicated API call to report Rx seg capabilities.

> 
>> OK, let's introduce the pointer in the rte_eth_dev_info and 
>> define struct rte_eth_rxseg_limitations as experimental.
>> Will it be allowed to update this one later (after 20.11)? 
>> Is an ABI break allowed in that case?
> 
> If it is experimental, you can change it at anytime.
> 
> Ideally, we could try to have a first version of the limitations
> during 20.11-rc2.

Yes, please.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
  @ 2020-10-15 13:05  3%         ` Luca Boccassi
  2020-10-15 14:03  3%           ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Luca Boccassi @ 2020-10-15 13:05 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas

On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > improvements in standardizing the naming of the various components in DPDK,
> > > and their associated feature-enabled macros.
> > > 
> > > Following this patch, each library will have the name in format,
> > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > build will have the form 'RTE_LIB_<NAME>'.
> > > 
> > > Similarly, for drivers, the equivalent name formats and macros are:
> > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > 'crypto' etc.
> > > 
> > > To avoid too many changes at once for end applications, the old macro names
> > > will still be provided in the build in this release, but will be removed
> > > subsequently.
> > > 
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > 
> > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > ---
> > >  app/test-bbdev/meson.build            |  4 ++--
> > >  app/test-crypto-perf/meson.build      |  2 +-
> > >  app/test-pmd/meson.build              | 12 ++++++------
> > >  app/test/meson.build                  |  8 ++++----
> > >  doc/guides/rel_notes/deprecation.rst  |  8 ++++++++
> > >  drivers/baseband/meson.build          |  1 -
> > >  drivers/bus/meson.build               |  1 -
> > >  drivers/common/meson.build            |  1 -
> > >  drivers/common/mlx5/meson.build       |  1 -
> > >  drivers/common/qat/meson.build        |  1 -
> > >  drivers/compress/meson.build          |  1 -
> > >  drivers/compress/octeontx/meson.build |  2 +-
> > >  drivers/crypto/meson.build            |  1 -
> > >  drivers/crypto/null/meson.build       |  2 +-
> > >  drivers/crypto/octeontx/meson.build   |  2 +-
> > >  drivers/crypto/octeontx2/meson.build  |  2 +-
> > >  drivers/crypto/scheduler/meson.build  |  2 +-
> > >  drivers/crypto/virtio/meson.build     |  2 +-
> > >  drivers/event/dpaa/meson.build        |  2 +-
> > >  drivers/event/dpaa2/meson.build       |  2 +-
> > >  drivers/event/meson.build             |  1 -
> > >  drivers/event/octeontx/meson.build    |  2 +-
> > >  drivers/event/octeontx2/meson.build   |  2 +-
> > >  drivers/mempool/meson.build           |  1 -
> > >  drivers/meson.build                   |  9 ++++-----
> > >  drivers/net/meson.build               |  1 -
> > >  drivers/net/mlx4/meson.build          |  2 +-
> > >  drivers/raw/ifpga/meson.build         |  2 +-
> > >  drivers/raw/meson.build               |  1 -
> > >  drivers/regex/meson.build             |  1 -
> > >  drivers/vdpa/meson.build              |  1 -
> > >  examples/bond/meson.build             |  2 +-
> > >  examples/ethtool/meson.build          |  2 +-
> > >  examples/ioat/meson.build             |  2 +-
> > >  examples/l2fwd-crypto/meson.build     |  2 +-
> > >  examples/ntb/meson.build              |  2 +-
> > >  examples/vm_power_manager/meson.build |  6 +++---
> > >  lib/librte_ethdev/meson.build         |  1 -
> > >  lib/librte_graph/meson.build          |  2 --
> > >  lib/meson.build                       |  3 ++-
> > >  40 files changed, 47 insertions(+), 55 deletions(-)
> > 
> > Does this change the shared object file names too, or only the macros?
> > 
> 
> It does indeed change the shared object file names, which is a little bit
> concerning. However, the consensus based on the RFC seemed to be that the
> benefit is likely worth the change. If we want, we can look to use symlinks
> to the old names on install, but I think that just delays the pain since I
> would expect few to actually change their build to the new names until the
> old ones and the symlinks completely go away.
> 
> /Bruce

It is a backward incompatible change, so we need to provide symlinks,
right? On upgrade, programs linked to librte_old.so will fail to start.
Or was this targeted at 20.11 thus piggy-backing on the ABI change
which forces a re-link?

-- 
Kind regards,
Luca Boccassi

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 11:49  3%                   ` Slava Ovsiienko
@ 2020-10-15 12:49  0%                     ` Thomas Monjalon
  2020-10-15 13:07  0%                       ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-15 12:49 UTC (permalink / raw)
  To: Ferruh Yigit, Jerin Jacob, Slava Ovsiienko, Andrew Rybchenko
  Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
	David Marchand

15/10/2020 13:49, Slava Ovsiienko:
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> > On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> > 
> > <...>
> > 
> > >>>>> If we see some of the features of such kind or other PMDs adopts
> > >>>>> the split feature - we'll try to find the common root and consider
> > >>>>> the way how
> > >>> to report it.
> > >>>>
> > >>>> My only concern with that approach will be ABI break again if
> > >>>> something needs to exposed over rte_eth_dev_info().
> > >>
> > >> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
> > >> rte_eth_dev_info to avoid ABI break?
> > >
> > > Works for me. If we add an additional reserved field.
> > >
> > > Due to the RC1 time constraint, I am OK to leave it as a reserved field
> > > and fill in the details when it is required, if other ethdev maintainers are OK.
> > > It will be required for feature completeness.
> > >
> > 
> > Sounds good to me.

OK for me.

> OK, let's introduce the pointer in the rte_eth_dev_info and 
> define struct rte_eth_rxseg_limitations as experimental.
> Will it be allowed to update this one later (after 20.11)? 
> Is an ABI break allowed in that case?

If it is experimental, you can change it at any time.

Ideally, we could try to have a first version of the limitations
during 20.11-rc2.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 11:36  0%                 ` Ferruh Yigit
@ 2020-10-15 11:49  3%                   ` Slava Ovsiienko
  2020-10-15 12:49  0%                     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 11:49 UTC (permalink / raw)
  To: Ferruh Yigit, Jerin Jacob
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Olivier Matz, Maxime Coquelin, David Marchand, Andrew Rybchenko

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, October 15, 2020 14:37
> To: Jerin Jacob <jerinjacobk@gmail.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> 
> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> 
> <...>
> 
> >>>>> If we see some of the features of such kind or other PMDs adopts
> >>>>> the split feature - we'll try to find the common root and consider
> >>>>> the way how
> >>> to report it.
> >>>>
> >>>> My only concern with that approach will be ABI break again if
> >>>> something needs to exposed over rte_eth_dev_info().
> >>
> >> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
> >> rte_eth_dev_info to avoid ABI break?
> >
> > Works for me. If we add an additional reserved field.
> >
> > Due to the RC1 time constraint, I am OK to leave it as a reserved field
> > and fill in the details when it is required, if other ethdev maintainers are OK.
> > It will be required for feature completeness.
> >
> 
> Sounds good to me.

OK, let's introduce the pointer in the rte_eth_dev_info and
define struct rte_eth_rxseg_limitations as experimental.
Will it be allowed to update this one later (after 20.11)?
Is an ABI break allowed in that case?

With best regards, Slava


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 11:26  0%               ` Jerin Jacob
@ 2020-10-15 11:36  0%                 ` Ferruh Yigit
  2020-10-15 11:49  3%                   ` Slava Ovsiienko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-15 11:36 UTC (permalink / raw)
  To: Jerin Jacob, Slava Ovsiienko
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Olivier Matz, Maxime Coquelin, David Marchand, Andrew Rybchenko

On 10/15/2020 12:26 PM, Jerin Jacob wrote:

<...>

>>>>> If we see some of the features of such kind or other PMDs adopts the
>>>>> split feature - we'll try to find the common root and consider the way how
>>> to report it.
>>>>
>>>> My only concern with that approach will be ABI break again if
>>>> something needs to exposed over rte_eth_dev_info().
>>
>> Let's reserve the pointer to struct rte_eth_rxseg_limitations
>> in the rte_eth_dev_info to avoid ABI break?
> 
> Works for me. If we add an additional reserved field.
> 
> Due to the RC1 time constraint, I am OK to leave it as a reserved field and fill
> in the details when it is required, if other ethdev maintainers are OK.
> It will be required for feature completeness.
> 

Sounds good to me.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 10:51  3%             ` Slava Ovsiienko
@ 2020-10-15 11:26  0%               ` Jerin Jacob
  2020-10-15 11:36  0%                 ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 11:26 UTC (permalink / raw)
  To: Slava Ovsiienko
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

On Thu, Oct 15, 2020 at 4:21 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
>
> Hi, Jerin
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, October 15, 2020 13:28
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; Stephen Hemminger
> > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > <arybchenko@solarflare.com>
> > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >
> [..snip..]
> >
> > struct rte_eth_rxseg {
> >     enum rte_eth_rxseg_mode mode ;
> >     union {
> >                struct rte_eth_rxseg_mode xxx {
> >                               struct rte_mempool *mp; /**< Memory pool to allocate
> > segment from. */
> >                               uint16_t length; /**< Segment data length, configures split
> > point. */
> >                                uint16_t offset; /**< Data offset from beginning of mbuf data
> > buffer. */
> >                                uint32_t reserved; /**< Reserved field. */
> >              }
> > }
>
> There is an array of rte_eth_rxseg. It would introduce multiple "enum rte_eth_rxseg_mode mode"
> and would cause some ambiguity. About mode selection - please, see below.
> Union seems to be good idea, let's adopt.

Ack. Let's take only the union concept.

>
> >
> > Another mode, Marvell PMD has it(I believe Intel also) i.e When we say:
> >
> > seg0 - pool0, len0=2000B, off0=0
> > seg1 - pool1, len1=2001B, off1=0
> >
> > packet size up to, 2000B goes to pool 0 and if is >=2001 goes to pool1.
> > I think, it is better to have mode param in rte_eth_rxseg for avoiding ABI
> > changes.(Just  like clean rte_flow APIs)
>
> It is supposed to choose with RTE_ETH_RX_OFFLOAD_xxx flags.
> For packet sorting it should be something like this RTE_ETH_RX_OFFLOAD_SORT.
> PMD reports it supports the feature, the flag is set in rx_conf->offloads
> and rxseg structure is interpreted according to these flags.

Works for me.

>
> Please, note, there is intentionally no check for RTE_ETH_RX_OFFLOAD_xxx
> in rte_eth_dev_rx_queue_setup() - it should be done on PMD side.
>
> >
> > > > If we see some of the features of such kind or other PMDs adopts the
> > > > split feature - we'll try to find the common root and consider the way how
> > to report it.
> > >
> > > My only concern with that approach will be ABI break again if
> > > something needs to exposed over rte_eth_dev_info().
>
> Let's reserve the pointer to struct rte_eth_rxseg_limitations
> in the rte_eth_dev_info to avoid ABI break?

Works for me. If we add an additional reserved field.

Due to the RC1 time constraint, I am OK to leave it as a reserved field and fill
in the details when it is required, if other ethdev maintainers are OK.
It will be required for feature completeness.



>
> With best regards, Slava

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 10:34  3%           ` Slava Ovsiienko
@ 2020-10-15 11:09  0%             ` Andrew Rybchenko
  2020-10-15 14:39  0%               ` Slava Ovsiienko
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-10-15 11:09 UTC (permalink / raw)
  To: Slava Ovsiienko, Jerin Jacob
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

On 10/15/20 1:34 PM, Slava Ovsiienko wrote:
> Hi, Andrew
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Thursday, October 15, 2020 12:49
>> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Jerin Jacob
>> <jerinjacobk@gmail.com>
>> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
>> <thomas@monjalon.net>; Stephen Hemminger
>> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
>> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
>> <maxime.coquelin@redhat.com>; David Marchand
>> <david.marchand@redhat.com>; Andrew Rybchenko
>> <arybchenko@solarflare.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
>>
>> On 10/15/20 10:43 AM, Slava Ovsiienko wrote:
>>> Hi, Jerin
>>>
>>>> -----Original Message-----
>>>> From: Jerin Jacob <jerinjacobk@gmail.com>
>>>> Sent: Wednesday, October 14, 2020 21:57
>>>> To: Slava Ovsiienko <viacheslavo@nvidia.com>
>>>> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Stephen Hemminger
>>>> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
>>>> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
>>>> <maxime.coquelin@redhat.com>; David Marchand
>>>> <david.marchand@redhat.com>; Andrew Rybchenko
>>>> <arybchenko@solarflare.com>
>>>> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
>>>>
>>>> On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
>>>> <viacheslavo@nvidia.com> wrote:
>>>>>
>>>>> The DPDK datapath in the transmit direction is very flexible.
>>>>> An application can build the multi-segment packet and manages almost
>>>>> all data aspects - the memory pools where segments are allocated
>>>>> from, the segment lengths, the memory attributes like external
>>>>> buffers, registered for DMA, etc.
>>>>>
>>>
>>> [..snip..]
>>>
>>>>> For example, let's suppose we configured the Rx queue with the
>>>>> following segments:
>>>>>     seg0 - pool0, len0=14B, off0=2
>>>>>     seg1 - pool1, len1=20B, off1=128B
>>>>>     seg2 - pool2, len2=20B, off2=0B
>>>>>     seg3 - pool3, len3=512B, off3=0B
>>>>
>>>>
>>>> Sorry for chime in late. This API lookout looks good to me.
>>>> But, I am wondering how the application can know the capability or
>>>> "limits" of struct rte_eth_rxseg structure for the specific PMD. The
>>>> other descriptor limit, it's being exposed with struct
>>>> rte_eth_dev_info::rx_desc_lim; If PMD can support a specific pattern
>>>> rather than returning the blanket error, the application should know the
>> limit.
>>>> IMO, it is better to add
>>>> struct rte_eth_rxseg *rxsegs;
>>>> unint16_t nb_max_rxsegs
>>>> in rte_eth_dev_info structure to express the capablity.
>>>> Where the en and offset can define the max offset.
>>>>
>>>> Thoughts?
>>>
>>> Moreover, there might be implied a lot of various limitations -
>>> offsets might be not supported at all or have some requirements for
>>> alignment, the similar requirements might be applied to segment size
>>> (say, ask for some granularity). Currently it is not obvious how to
>>> report all nuances, and it is supposed the limitations of this kind must be
>> documented in PMD chapter. As for mlx5 - it has no special limitations besides
>> common requirements to the regular segments.
>>>
>>> One more point - the split feature might be considered as just one of
>>> possible cases of using these segment descriptions, other features might
>> impose other (unknown for now) limitations.
>>> If we see some of the features of such kind or other PMDs adopts the
>>> split feature - we'll try to find the common root and consider the way how to
>> report it.
>>
>> At least there are few simple limitations which are easy to
>> express:
>>  1. Maximum number of segments
> We have scatter capability and we do not report the maximal number of segments,
> it is on PMD own. We could add the field to the rte_eth_dev_info, but not sure
> whether we have something special to report there even for mlx5 case.

There are always limitations in software and HW. Nothing is
unlimited. Limits could be high, but they still exist.
The number of descriptors? The width of a field in the HW interface?
The maximum length of the config message to the HW?
All of the above could limit it directly or indirectly.


>>  2. Possibility to use the last segment many times if required
>>     (I was suggesting to use scatter for it, but you rejected
>>      the idea - may be time to reconsider :) ) 
> 
> Mmm, sorry I do not follow, it might be I did not understand/missed your idea.
> Some of the last segment attributes are used multiple times to scatter the rest
> of the data in fashion very close to the existing scattering approach - at least,
> pool and buffer size from this pool are used. The beginning of the packet
> scattered according to the new descriptions, the rest of the packet -
> according to the existing regular scattering with pool settings from
> the last segment description.

I believe that the possibility to split into fixed segments
(BUFFER_SPLIT) and the possibility to use a mempool (just mp or
the last segment) many times if a packet does not fit (SCATTER)
are *different* features.
I can easily imagine HW which could do BUFFER_SPLIT into
fixed segments, but cannot use the last segment many times
(i.e. no classical SCATTER).

> 
>  3. Maximum offset
>>     Frankly speaking I'm not sure why it cannot be handled on
>>     PMD level (i.e. provide descriptors with offset taken into
>>     account or guarantee that HW mempool objects initialized
>>     correctly with required headroom). May be in some corner
>>     cases when the same HW mempool is shared by various
>>     segments with different offset requirements.
> 
> HW offsets are beyond the feature scope, the offsets in the segment
> description is supposed to be added to the native pool offsets (if any).

Are you saying that offsets are not passed to HW and are just
handled by the PMD to provide correct IOVA addresses to put
data to? If so, it is an implementation detail which is
specific to mlx5, and there are no specific limitations
except data room, size and offset consistency.
But the offset could be passed to HW, and it could be, for example,
just 8 bits wide.

> 
>>  4. Offset alignment
>>  5. Maximum/minimum length of a segment
>>  6. Length alignment
> In which form? Mask of lsbs ? 0 means no limitations ?

log2, i.e. 0 => 1 (no limitations) 1 => 2 (even only),
6 => 64 (64-byte cache line aligned) etc.

> 
>>
>> I realize that 3, 4 and 5 could be per segment number.
>> If it is really that complex, report common denominator which is guaranteed to
>> work. If we have no checks on ethdev layer, application can ignore it if it knows
>> better.
> 
> Currently it is not clear at all what kind of limitations should be reported,
> we could include all of mentioned/proposed ones, and no one will report there -
> mlx5 has no any reasonable limitations to report for now.
> 
> Should we reserve some pointer field in the rte_eth_dev_info to report
> the limitations? (Limitation description should contain variable size array,
> depending on the number of segments, so pointer seems to be appropriate).
> It would allow us to avoid ABI break, and present the limitation structure once it is defined.

I will let other ethdev maintainers make a decision here.
My vote would be to report the limitations mentioned above.
It looks like Jerin is also interested in limitations
reporting. Not sure if my form looks OK or not.
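
A purely hypothetical sketch of what a reporting structure covering the items
discussed above could look like - struct rte_eth_rxseg_limitations does not
exist at this point, and the fields below merely mirror items 1-6.

#include <stdint.h>

struct rte_eth_rxseg_limitations {
	uint16_t max_nseg;           /* 1: maximum number of segments */
	uint8_t  multi_last_seg;     /* 2: last segment may be reused (SCATTER) */
	uint16_t max_offset;         /* 3: maximum data offset in a segment */
	uint8_t  offset_align_log2;  /* 4: offset alignment, log2 form */
	uint16_t min_seg_len;        /* 5: minimum segment length */
	uint16_t max_seg_len;        /* 5: maximum segment length */
	uint8_t  seg_len_align_log2; /* 6: length alignment, log2 form */
};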

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15 10:27  3%           ` Jerin Jacob
@ 2020-10-15 10:51  3%             ` Slava Ovsiienko
  2020-10-15 11:26  0%               ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 10:51 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

Hi, Jerin

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, October 15, 2020 13:28
> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> 
[..snip..]
> 
> struct rte_eth_rxseg {
>     enum rte_eth_rxseg_mode mode ;
>     union {
>                struct rte_eth_rxseg_mode xxx {
>                               struct rte_mempool *mp; /**< Memory pool to allocate
> segment from. */
>                               uint16_t length; /**< Segment data length, configures split
> point. */
>                                uint16_t offset; /**< Data offset from beginning of mbuf data
> buffer. */
>                                uint32_t reserved; /**< Reserved field. */
>              }
> }

There is an array of rte_eth_rxseg. It would introduce multiple "enum rte_eth_rxseg_mode mode"
fields and would cause some ambiguity. About mode selection - please see below.
A union seems to be a good idea, let's adopt it.

> 
> Another mode, Marvell PMD has it(I believe Intel also) i.e When we say:
> 
> seg0 - pool0, len0=2000B, off0=0
> seg1 - pool1, len1=2001B, off1=0
> 
> packet size up to, 2000B goes to pool 0 and if is >=2001 goes to pool1.
> I think, it is better to have mode param in rte_eth_rxseg for avoiding ABI
> changes.(Just  like clean rte_flow APIs)

The mode is supposed to be chosen with RTE_ETH_RX_OFFLOAD_xxx flags.
For packet sorting it would be something like RTE_ETH_RX_OFFLOAD_SORT.
The PMD reports that it supports the feature, the flag is set in rx_conf->offloads,
and the rxseg structure is interpreted according to these flags.

Please note, there is intentionally no check for RTE_ETH_RX_OFFLOAD_xxx
in rte_eth_dev_rx_queue_setup() - it should be done on the PMD side.

> 
> > > If we see some of the features of such kind or other PMDs adopts the
> > > split feature - we'll try to find the common root and consider the way how
> to report it.
> >
> > My only concern with that approach will be ABI break again if
> > something needs to exposed over rte_eth_dev_info().

Let's reserve the pointer to struct rte_eth_rxseg_limitations
in the rte_eth_dev_info to avoid ABI break?

With best regards, Slava
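
A minimal sketch of the configuration flow described above, assuming the
rx_seg/rx_nseg fields of struct rte_eth_rxconf and the
RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag from the patch under discussion; the
field names may differ in the final version, and the mempools are assumed
to be created by the application.

#include <rte_ethdev.h>

static int
setup_split_rxq(uint16_t port_id, uint16_t queue_id, unsigned int socket_id,
		struct rte_mempool *hdr_pool, struct rte_mempool *data_pool)
{
	struct rte_eth_rxseg segs[2] = {
		{ .mp = hdr_pool,  .length = 14,  .offset = 2 }, /* headers */
		{ .mp = data_pool, .length = 512, .offset = 0 }, /* payload */
	};
	struct rte_eth_rxconf rxconf = {
		.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
		.rx_seg = segs,
		.rx_nseg = 2,
	};

	/* The PMD sees the offload flag and interprets the segment array. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 512, socket_id,
				      &rxconf, NULL);
}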

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  @ 2020-10-15 10:34  3%           ` Slava Ovsiienko
  2020-10-15 11:09  0%             ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 10:34 UTC (permalink / raw)
  To: Andrew Rybchenko, Jerin Jacob
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

Hi, Andrew

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 12:49
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Jerin Jacob
> <jerinjacobk@gmail.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
> 
> On 10/15/20 10:43 AM, Slava Ovsiienko wrote:
> > Hi, Jerin
> >
> >> -----Original Message-----
> >> From: Jerin Jacob <jerinjacobk@gmail.com>
> >> Sent: Wednesday, October 14, 2020 21:57
> >> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> >> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Stephen Hemminger
> >> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> >> <maxime.coquelin@redhat.com>; David Marchand
> >> <david.marchand@redhat.com>; Andrew Rybchenko
> >> <arybchenko@solarflare.com>
> >> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >>
> >> On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> >> <viacheslavo@nvidia.com> wrote:
> >>>
> >>> The DPDK datapath in the transmit direction is very flexible.
> >>> An application can build the multi-segment packet and manages almost
> >>> all data aspects - the memory pools where segments are allocated
> >>> from, the segment lengths, the memory attributes like external
> >>> buffers, registered for DMA, etc.
> >>>
> >
> > [..snip..]
> >
> >>> For example, let's suppose we configured the Rx queue with the
> >>> following segments:
> >>>     seg0 - pool0, len0=14B, off0=2
> >>>     seg1 - pool1, len1=20B, off1=128B
> >>>     seg2 - pool2, len2=20B, off2=0B
> >>>     seg3 - pool3, len3=512B, off3=0B
> >>
> >>
> >> Sorry for chime in late. This API lookout looks good to me.
> >> But, I am wondering how the application can know the capability or
> >> "limits" of struct rte_eth_rxseg structure for the specific PMD. The
> >> other descriptor limit, it's being exposed with struct
> >> rte_eth_dev_info::rx_desc_lim; If PMD can support a specific pattern
> >> rather than returning the blanket error, the application should know the
> limit.
> >> IMO, it is better to add
> >> struct rte_eth_rxseg *rxsegs;
> >> unint16_t nb_max_rxsegs
> >> in rte_eth_dev_info structure to express the capablity.
> >> Where the en and offset can define the max offset.
> >>
> >> Thoughts?
> >
> > Moreover, there might be implied a lot of various limitations -
> > offsets might be not supported at all or have some requirements for
> > alignment, the similar requirements might be applied to segment size
> > (say, ask for some granularity). Currently it is not obvious how to
> > report all nuances, and it is supposed the limitations of this kind must be
> documented in PMD chapter. As for mlx5 - it has no special limitations besides
> common requirements to the regular segments.
> >
> > One more point - the split feature might be considered as just one of
> > possible cases of using these segment descriptions, other features might
> impose other (unknown for now) limitations.
> > If we see some of the features of such kind or other PMDs adopts the
> > split feature - we'll try to find the common root and consider the way how to
> report it.
> 
> At least there are few simple limitations which are easy to
> express:
>  1. Maximum number of segments
We have the scatter capability and we do not report the maximal number of segments;
it is up to the PMD. We could add the field to the rte_eth_dev_info, but I am not sure
whether we have something special to report there even for mlx5 case.


>  2. Possibility to use the last segment many times if required
>     (I was suggesting to use scatter for it, but you rejected
>      the idea - may be time to reconsider :) ) 

Mmm, sorry, I do not follow - it might be that I did not understand or missed your idea.
Some of the last segment attributes are used multiple times to scatter the rest
of the data in a fashion very close to the existing scattering approach - at least,
the pool and the buffer size from this pool are used. The beginning of the packet
is scattered according to the new descriptions, and the rest of the packet
according to the existing regular scattering with the pool settings from
the last segment description.

>  3. Maximum offset
>     Frankly speaking I'm not sure why it cannot be handled on
>     PMD level (i.e. provide descriptors with offset taken into
>     account or guarantee that HW mempool objects initialized
>     correctly with required headroom). May be in some corner
>     cases when the same HW mempool is shared by various
>     segments with different offset requirements.

HW offsets are beyond the feature scope; the offsets in the segment
description are supposed to be added to the native pool offsets (if any).

>  4. Offset alignment
>  5. Maximum/minimum length of a segment
>  6. Length alignment
In which form? A mask of LSBs? Does 0 mean no limitations?

> 
> I realize that 3, 4 and 5 could be per segment number.
> If it is really that complex, report common denominator which is guaranteed to
> work. If we have no checks on ethdev layer, application can ignore it if it knows
> better.

Currently it is not clear at all what kind of limitations should be reported;
we could include all of the mentioned/proposed ones, and no one would report anything there -
mlx5 has no reasonable limitations to report for now.

Should we reserve some pointer field in the rte_eth_dev_info to report
the limitations? (Limitation description should contain variable size array,
depending on the number of segments, so pointer seems to be appropriate).
It would allow us to avoid ABI break, and present the limitation structure once it is defined.

With best regards, Slava


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  2020-10-15  9:27  3%         ` Jerin Jacob
@ 2020-10-15 10:27  3%           ` Jerin Jacob
  2020-10-15 10:51  3%             ` Slava Ovsiienko
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 10:27 UTC (permalink / raw)
  To: Slava Ovsiienko
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

On Thu, Oct 15, 2020 at 2:57 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Thu, Oct 15, 2020 at 1:13 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
> >
> > Hi, Jerin
>
> Hi Slava,
>
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Wednesday, October 14, 2020 21:57
> > > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > > <thomas@monjalon.net>; Stephen Hemminger
> > > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > > <maxime.coquelin@redhat.com>; David Marchand
> > > <david.marchand@redhat.com>; Andrew Rybchenko
> > > <arybchenko@solarflare.com>
> > > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> > >
> > > On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> > > <viacheslavo@nvidia.com> wrote:
> > > >
> > > > The DPDK datapath in the transmit direction is very flexible.
> > > > An application can build the multi-segment packet and manages almost
> > > > all data aspects - the memory pools where segments are allocated from,
> > > > the segment lengths, the memory attributes like external buffers,
> > > > registered for DMA, etc.
> > > >
> >
> > [..snip..]
> >
> > > > For example, let's suppose we configured the Rx queue with the
> > > > following segments:
> > > >     seg0 - pool0, len0=14B, off0=2
> > > >     seg1 - pool1, len1=20B, off1=128B
> > > >     seg2 - pool2, len2=20B, off2=0B
> > > >     seg3 - pool3, len3=512B, off3=0B
> > >
> > >
> > > Sorry for chime in late. This API lookout looks good to me.
> > > But, I am wondering how the application can know the capability or "limits" of
> > > struct rte_eth_rxseg structure for the specific PMD. The other descriptor limit,
> > > it's being exposed with struct rte_eth_dev_info::rx_desc_lim; If PMD can
> > > support a specific pattern rather than returning the blanket error, the
> > > application should know the limit.
> > > IMO, it is better to add
> > > struct rte_eth_rxseg *rxsegs;
> > > unint16_t nb_max_rxsegs
> > > in rte_eth_dev_info structure to express the capablity.
> > > Where the en and offset can define the max offset.
> > >
> > > Thoughts?
> >
> > Moreover, there might be implied a lot of various limitations - offsets might be not supported at all or
> > have some requirements for alignment, the similar requirements might be applied to segment size
> > (say, ask for some granularity). Currently it is not obvious how to report all nuances, and it is supposed
> > the limitations of this kind must be documented in PMD chapter. As for mlx5 - it has no special
> > limitations besides common requirements to the regular segments.
>
> Reporting the limitation in the documentation will not help for the
> generic applications.
>
> >
> > One more point - the split feature might be considered as just one of possible cases of using
> > these segment descriptions, other features might impose other (unknown for now) limitations.

Also, I agree that we will have multiple use cases with segment descriptors.
In order to make the API definition future proof, it is better to go
from:
struct rte_eth_rxseg {
   struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
  uint16_t length; /**< Segment data length, configures split point. */
  uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
  uint32_t reserved; /**< Reserved field. */
};
to something like below:

struct rte_eth_rxseg {
	enum rte_eth_rxseg_mode mode;
	union {
		struct {
			struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
			uint16_t length; /**< Segment data length, configures split point. */
			uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
			uint32_t reserved; /**< Reserved field. */
		} xxx;
	};
};

There is another mode, which the Marvell PMD has (I believe Intel too), i.e.
when we say:

seg0 - pool0, len0=2000B, off0=0
seg1 - pool1, len1=2001B, off1=0

packets of size up to 2000B go to pool0 and packets >= 2001B go to pool1.
I think it is better to have a mode param in rte_eth_rxseg to avoid
ABI changes (just like the clean rte_flow APIs).
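
A hypothetical fragment built on the union/mode layout sketched above to show
how such a sorting mode could be expressed; RTE_ETH_RXSEG_MODE_SORT and the
member name xxx are made-up placeholders, not existing API.

static void
fill_sort_segs(struct rte_eth_rxseg segs[2],
	       struct rte_mempool *pool0, struct rte_mempool *pool1)
{
	segs[0].mode = RTE_ETH_RXSEG_MODE_SORT;
	segs[0].xxx.mp = pool0;
	segs[0].xxx.length = 2000;	/* packets up to 2000B land in pool0 */
	segs[1].mode = RTE_ETH_RXSEG_MODE_SORT;
	segs[1].xxx.mp = pool1;
	segs[1].xxx.length = 2001;	/* larger packets land in pool1 */
}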

> > If we see some of the features of such kind or other PMDs adopts the split feature - we'll try to find
> > the common root and consider the way how to report it.
>
> My only concern with that approach will be ABI break again if
> something needs to exposed over rte_eth_dev_info().
> IMO, the feature is complete only when its capabilities are
> exposed in a programmatic manner.
> As for mlx5, if there is no limitation, then report
> rte_eth_dev_info::rxsegs[x].len, offset etc. as UINT16_MAX so
> that the application is aware of the state.
>
> >
> > With best regards, Slava
> >

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh
  2020-10-14 10:41 26%       ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-15 10:16  4%         ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:16 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 14/10/2020 11:41, Conor Walsh wrote:
> The core reason for this patch is to reduce the amount of time needed to
> run abi checks. The number of abi checks being run has been reduced to
> only 2 (1 x86_64 and 1 arm). The script can now also take advantage of
> prebuilt abi references.
> 
> Invoke using "./test-meson-builds.sh [-b <build directory>]
>    [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
>    [-d <directory for abi references>]"
>  - <build directory>: directory to store builds (relative or absolute)
>  - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
>  - <uri for abi references>: http location or directory to get prebuilt
>    abi references from
>  - <directory for abi references>: directory to store abi references
>    (relative or absolute)
> e.g. "./test-meson-builds.sh -a latest"
> If no flags are specified test-meson-builds.sh will run the standard
> meson tests with default options unless environmental variables are
> specified.
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives
  2020-10-14 10:41 21%       ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-15 10:15  4%         ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:15 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 14/10/2020 11:41, Conor Walsh wrote:
> This patch adds a script that generates compressed archives
> containing .dump files which can be used to perform abi
> breakage checking in test-meson-build.sh.
> 
> Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
>  - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
> e.g. "./gen-abi-tarballs.sh -v latest"
> 
> If no tag is specified, the script will default to "latest"
> Using these parameters the script will produce several *.tar.gz
> archives containing .dump files required to do abi breakage checking
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
  2020-10-15 10:08  0% ` David Marchand
@ 2020-10-15 10:10  3%   ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:10 UTC (permalink / raw)
  To: David Marchand
  Cc: Declan Doherty, Neil Horman, Anoob Joseph, Fiona Trahe,
	Akhil Goyal, Arek Kusztal, Thomas Monjalon, dev

It is 100% ... my mistake, I was checking for ABI snafus.

Ray K

On 15/10/2020 11:08, David Marchand wrote:
> On Thu, Oct 15, 2020 at 11:59 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>>
>> Function versioning to preserve the ABI was added to cryptodev in
>> commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
>> ChaCha20-Poly1305").  This is no longer required in the DPDK_21
>> version node.
> 
> Is it a duplicate for [1]?
> 
> 1: https://git.dpdk.org/next/dpdk-next-crypto/commit/lib/librte_cryptodev?id=e43f809f3a59a06f2bc80a2a6fe0c133f9e401fe
> 
> 

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
  2020-10-15  9:56 11% [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node Ray Kinsella
@ 2020-10-15 10:08  0% ` David Marchand
  2020-10-15 10:10  3%   ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-10-15 10:08 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: Declan Doherty, Neil Horman, Anoob Joseph, Fiona Trahe,
	Akhil Goyal, Arek Kusztal, Thomas Monjalon, dev

On Thu, Oct 15, 2020 at 11:59 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Function versioning to preserve the ABI was added to cryptodev in
> commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
> ChaCha20-Poly1305").  This is no longer required in the DPDK_21
> version node.

Is it a duplicate for [1]?

1: https://git.dpdk.org/next/dpdk-next-crypto/commit/lib/librte_cryptodev?id=e43f809f3a59a06f2bc80a2a6fe0c133f9e401fe


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
@ 2020-10-15  9:56 11% Ray Kinsella
  2020-10-15 10:08  0% ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-10-15  9:56 UTC (permalink / raw)
  To: Declan Doherty, Ray Kinsella, Neil Horman, Anoob Joseph,
	Fiona Trahe, Akhil Goyal, Arek Kusztal
  Cc: thomas, david.marchand, dev

Function versioning to preserve the ABI was added to cryptodev in
commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
ChaCha20-Poly1305").  This is no longer required in the DPDK_21
version node.

Fixes: b922dbd38ced ("cryptodev: add ChaCha20-Poly1305 AEAD algorithm")

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 lib/librte_cryptodev/rte_cryptodev.c          | 139 +-----------------
 lib/librte_cryptodev/rte_cryptodev.h          |  33 -----
 .../rte_cryptodev_version.map                 |   6 -
 3 files changed, 4 insertions(+), 174 deletions(-)

diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..a74daee46 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -36,8 +36,6 @@
 #include <rte_errno.h>
 #include <rte_spinlock.h>
 #include <rte_string_fns.h>
-#include <rte_compat.h>
-#include <rte_function_versioning.h>
 
 #include "rte_crypto.h"
 #include "rte_cryptodev.h"
@@ -59,11 +57,6 @@ static struct rte_cryptodev_global cryptodev_globals = {
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
-static const struct rte_cryptodev_capabilities
-		cryptodev_undefined_capabilities[] = {
-		RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
 static struct rte_cryptodev_capabilities
 		*capability_copy[RTE_CRYPTO_MAX_DEVS];
 static uint8_t is_capability_checked[RTE_CRYPTO_MAX_DEVS];
@@ -291,43 +284,8 @@ rte_crypto_auth_operation_strings[] = {
 		[RTE_CRYPTO_AUTH_OP_GENERATE]	= "generate"
 };
 
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx)
-{
-	const struct rte_cryptodev_capabilities *capability;
-	struct rte_cryptodev_info dev_info;
-	int i = 0;
-
-	rte_cryptodev_info_get_v20(dev_id, &dev_info);
-
-	while ((capability = &dev_info.capabilities[i++])->op !=
-			RTE_CRYPTO_OP_TYPE_UNDEFINED) {
-		if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
-			continue;
-
-		if (capability->sym.xform_type != idx->type)
-			continue;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
-			capability->sym.auth.algo == idx->algo.auth)
-			return &capability->sym;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
-			capability->sym.cipher.algo == idx->algo.cipher)
-			return &capability->sym;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_AEAD &&
-				capability->sym.aead.algo == idx->algo.aead)
-			return &capability->sym;
-	}
-
-	return NULL;
-}
-VERSION_SYMBOL(rte_cryptodev_sym_capability_get, _v20, 20.0);
-
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
+const struct rte_cryptodev_symmetric_capability *
+rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx)
 {
 	const struct rte_cryptodev_capabilities *capability;
@@ -359,11 +317,6 @@ rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
 
 	return NULL;
 }
-MAP_STATIC_SYMBOL(const struct rte_cryptodev_symmetric_capability *
-		rte_cryptodev_sym_capability_get(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx),
-		rte_cryptodev_sym_capability_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_sym_capability_get, _v21, 21);
 
 static int
 param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
@@ -1233,89 +1186,8 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
-static void
-get_v20_capabilities(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
-	const struct rte_cryptodev_capabilities *capability;
-	uint8_t found_invalid_capa = 0;
-	uint8_t counter = 0;
-
-	for (capability = dev_info->capabilities;
-			capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED;
-			++capability, ++counter) {
-		if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
-				capability->sym.xform_type ==
-					RTE_CRYPTO_SYM_XFORM_AEAD
-				&& capability->sym.aead.algo >=
-				RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
-			found_invalid_capa = 1;
-			counter--;
-		}
-	}
-	is_capability_checked[dev_id] = 1;
-	if (!found_invalid_capa)
-		return;
-	capability_copy[dev_id] = malloc(counter *
-		sizeof(struct rte_cryptodev_capabilities));
-	if (capability_copy[dev_id] == NULL) {
-		 /*
-		  * error case - no memory to store the trimmed
-		  * list, so have to return an empty list
-		  */
-		dev_info->capabilities =
-			cryptodev_undefined_capabilities;
-		is_capability_checked[dev_id] = 0;
-	} else {
-		counter = 0;
-		for (capability = dev_info->capabilities;
-				capability->op !=
-				RTE_CRYPTO_OP_TYPE_UNDEFINED;
-				capability++) {
-			if (!(capability->op ==
-				RTE_CRYPTO_OP_TYPE_SYMMETRIC
-				&& capability->sym.xform_type ==
-				RTE_CRYPTO_SYM_XFORM_AEAD
-				&& capability->sym.aead.algo >=
-				RTE_CRYPTO_AEAD_CHACHA20_POLY1305)) {
-				capability_copy[dev_id][counter++] =
-						*capability;
-			}
-		}
-		dev_info->capabilities =
-				capability_copy[dev_id];
-	}
-}
-
-void __vsym
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
-	struct rte_cryptodev *dev;
-
-	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
-		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
-		return;
-	}
-
-	dev = &rte_crypto_devices[dev_id];
-
-	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
-
-	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
-	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
-
-	if (capability_copy[dev_id] == NULL) {
-		if (!is_capability_checked[dev_id])
-			get_v20_capabilities(dev_id, dev_info);
-	} else
-		dev_info->capabilities = capability_copy[dev_id];
-
-	dev_info->driver_name = dev->device->driver->name;
-	dev_info->device = dev->device;
-}
-VERSION_SYMBOL(rte_cryptodev_info_get, _v20, 20.0);
-
-void __vsym
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 {
 	struct rte_cryptodev *dev;
 
@@ -1334,9 +1206,6 @@ rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 	dev_info->driver_name = dev->device->driver->name;
 	dev_info->device = dev->device;
 }
-MAP_STATIC_SYMBOL(void rte_cryptodev_info_get(uint8_t dev_id,
-	struct rte_cryptodev_info *dev_info), rte_cryptodev_info_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_info_get, _v21, 21);
 
 int
 rte_cryptodev_callback_register(uint8_t dev_id,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..f4767b52c 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -219,14 +219,6 @@ struct rte_cryptodev_asym_capability_idx {
  *   - Return NULL if the capability not exist.
  */
 const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
 rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx);
 
@@ -789,34 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
  * the last valid element has it's op field set to
  * RTE_CRYPTO_OP_TYPE_UNDEFINED.
  */
-
 void
 rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
 
-/* An extra element RTE_CRYPTO_AEAD_CHACHA20_POLY1305 is added
- * to enum rte_crypto_aead_algorithm, also changing the value of
- *  RTE_CRYPTO_AEAD_LIST_END. To maintain ABI compatibility with applications
- * which linked against earlier versions, preventing them, for example, from
- * picking up the new value and using it to index into an array sized too small
- * for it, it is necessary to have two versions of rte_cryptodev_info_get()
- * The latest version just returns directly the capabilities retrieved from
- * the device. The compatible version inspects the capabilities retrieved
- * from the device, but only returns them directly if the new value
- * is not included. If the new value is included, it allocates space
- * for a copy of the device capabilities, trims the new value from this
- * and returns this copy. It only needs to do this once per device.
- * For the corner case of a corner case when the alloc may fail,
- * an empty capability list is returned, as there is no mechanism to return
- * an error and adding such a mechanism would itself be an ABI breakage.
- * The compatible version can be removed after the next major ABI release.
- */
-
-void
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-
-void
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-
 /**
  * Register a callback function for specific device id.
  *
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..7727286ac 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -58,12 +58,6 @@ DPDK_21 {
 	local: *;
 };
 
-DPDK_20.0 {
-	global:
-	rte_cryptodev_info_get;
-	rte_cryptodev_sym_capability_get;
-};
-
 EXPERIMENTAL {
 	global:
 
-- 
2.23.0


^ permalink raw reply	[relevance 11%]

* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
  @ 2020-10-15  9:27  3%         ` Jerin Jacob
  2020-10-15 10:27  3%           ` Jerin Jacob
    1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15  9:27 UTC (permalink / raw)
  To: Slava Ovsiienko
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
	Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
	Andrew Rybchenko

On Thu, Oct 15, 2020 at 1:13 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
>
> Hi, Jerin

Hi Slava,

>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Wednesday, October 14, 2020 21:57
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; Stephen Hemminger
> > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > <arybchenko@solarflare.com>
> > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >
> > On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> > <viacheslavo@nvidia.com> wrote:
> > >
> > > The DPDK datapath in the transmit direction is very flexible.
> > > An application can build the multi-segment packet and manages almost
> > > all data aspects - the memory pools where segments are allocated from,
> > > the segment lengths, the memory attributes like external buffers,
> > > registered for DMA, etc.
> > >
>
> [..snip..]
>
> > > For example, let's suppose we configured the Rx queue with the
> > > following segments:
> > >     seg0 - pool0, len0=14B, off0=2
> > >     seg1 - pool1, len1=20B, off1=128B
> > >     seg2 - pool2, len2=20B, off2=0B
> > >     seg3 - pool3, len3=512B, off3=0B
> >
> >
> > Sorry for chiming in late. This API layout looks good to me.
> > But I am wondering how the application can know the capability or "limits" of
> > the struct rte_eth_rxseg structure for a specific PMD. The other descriptor limit
> > is being exposed with struct rte_eth_dev_info::rx_desc_lim; if a PMD can
> > support only a specific pattern rather than returning a blanket error, the
> > application should know the limit.
> > IMO, it is better to add
> > struct rte_eth_rxseg *rxsegs;
> > uint16_t nb_max_rxsegs
> > in the rte_eth_dev_info structure to express the capability,
> > where the len and offset can define the max offset.
> >
> > Thoughts?
>
> Moreover, a lot of various limitations might be implied - offsets might not be supported at all, or
> might have alignment requirements, and similar requirements might apply to the segment size
> (say, some granularity is required). Currently it is not obvious how to report all these nuances,
> and it is assumed that limitations of this kind must be documented in the PMD chapter. As for mlx5 -
> it has no special limitations besides the common requirements for regular segments.

Reporting the limitation in the documentation will not help generic
applications.

>
> One more point - the split feature might be considered as just one possible use case of
> these segment descriptions; other features might impose other (for now unknown) limitations.
> If we see some features of this kind, or other PMDs adopt the split feature, we'll try to
> find the common root and consider how to report it.

My only concern with that approach will be another ABI break if
something needs to be exposed over rte_eth_dev_info().
IMO, a feature should be considered complete only when its capabilities
are exposed in a programmatic manner.
As for mlx5, if there is no limitation, then report
rte_eth_dev_info::rxsegs[x].len, offset, etc. as UINT16_MAX so
that the application is aware of the state.
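
For illustration only, a sketch of how an application might consume such a
report; rxsegs and nb_max_rxsegs are just the fields suggested above (they do
not exist in struct rte_eth_dev_info today), port_id is a placeholder, and
rte_ethdev.h is assumed to be included:

    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) != 0)
            return -1;

    if (info.nb_max_rxsegs == 0) {
            /* PMD reports no Rx segment descriptions: buffer split unsupported. */
    } else if (info.rxsegs[0].length == UINT16_MAX &&
                    info.rxsegs[0].offset == UINT16_MAX) {
            /* No specific per-segment length/offset limitation reported. */
    }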

>
> With best regards, Slava
>

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 2/5] ethdev: add new attributes to hairpin config
  @ 2020-10-15  5:35  4%   ` Bing Zhao
  0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-15  5:35 UTC (permalink / raw)
  To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
	bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

To support two ports hairpin mode and keep the backward compatibility
for the application, two new attribute members of the hairpin queue
configuration structure will be added.

`tx_explicit` indicates whether the application itself will insert the TX
part of the flow rules. If not set, the PMD will insert the rules
implicitly.
`manual_bind` indicates whether the hairpin TX queue and its peer RX queue
will be bound manually via the new API, instead of automatically during
the device start stage.

Different TX and RX queue pairs could have different values, but it is
highly recommended that all paired queues between one egress port and its
peer ingress port have the same values, in order to avoid inconsistent
behavior. The actual support of these attribute parameters will be
checked and decided by the PMD drivers.

For single port hairpin, if both attributes are left at zero, the behavior
remains the same as before: no bind API needs to be called and no TX flow
rules need to be inserted manually by the application.
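
As an illustration (not part of the patch), a minimal sketch of how an
application could request the new behavior when setting up a hairpin Rx
queue; the port/queue ids and the descriptor count are placeholders:

    #include <rte_ethdev.h>

    /* Request manual binding and explicit TX flow rules for one Rx queue. */
    static int
    setup_hairpin_rxq(uint16_t port_id, uint16_t rx_queue_id,
                    uint16_t peer_port_id, uint16_t peer_queue_id)
    {
            struct rte_eth_hairpin_conf conf = {
                    .peer_count = 1,
                    .tx_explicit = 1, /* application inserts the TX part of the rules */
                    .manual_bind = 1, /* application calls the bind API after start */
            };

            conf.peers[0].port = peer_port_id;
            conf.peers[0].queue = peer_queue_id;

            return rte_eth_rx_hairpin_queue_setup(port_id, rx_queue_id, 128, &conf);
    }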

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
 doc/guides/prog_guide/rte_flow.rst     |  3 +++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++++
 lib/librte_ethdev/rte_ethdev.c         |  8 ++++----
 lib/librte_ethdev/rte_ethdev.h         | 27 ++++++++++++++++++++++++++-
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index f26a6c2..c6f828a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
 loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
 the other path depending on HW capability.
 
+In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
+used to connect the RX and TX flows if it can be propagated from RX to TX path.
+
 .. _table_rte_flow_action_set_meta:
 
 .. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0a9ae54..2e7dc2d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -70,6 +70,7 @@ New Features
 * **Updated the ethdev library to support hairpin between two ports.**
 
   New APIs are introduced to support binding / unbinding 2 ports hairpin.
+  Hairpin TX part flow rules can be inserted explicitly.
 
 * **Updated Broadcom bnxt driver.**
 
@@ -355,6 +356,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * ``struct rte_eth_hairpin_conf`` has two new members:
+
+    * ``uint32_t tx_explicit:1;``
+    * ``uint32_t manual_bind:1;``
+
 
 Known Issues
 ------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 150c555..3cde7a7 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -2004,13 +2004,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_rx_2_tx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_rx_2_tx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Rx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
@@ -2175,13 +2175,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_tx_2_rx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_tx_2_rx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Tx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 3bdb189..dabbbd4 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
  * A structure used to configure hairpin binding.
  */
 struct rte_eth_hairpin_conf {
-	uint16_t peer_count; /**< The number of peers. */
+	uint32_t peer_count:16; /**< The number of peers. */
+
+	/**
+	 * Explicit TX flow rule mode. One hairpin pair of queues should have
+	 * the same attribute. The actual support depends on the PMD.
+	 *
+	 * - When set, the user should be responsible for inserting the hairpin
+	 *   TX part flows and removing them.
+	 * - When clear, the PMD will try to handle the TX part of the flows,
+	 *   e.g., by splitting one flow into two parts.
+	 */
+	uint32_t tx_explicit:1;
+
+	/**
+	 * Manually bind hairpin queues. One hairpin pair of queues should have
+	 * the same attribute. The actual support depends on the PMD.
+	 *
+	 * - When set, to enable hairpin, the user should call the hairpin bind
+	 *   API after all the queues are set up properly and the ports are
+	 *   started. Also, the hairpin unbind API should be called accordingly
+	 *   before stopping a port that with hairpin configured.
+	 * - When clear, the PMD will try to enable the hairpin with the queues
+	 *   configured automatically during port start.
+	 */
+	uint32_t manual_bind:1;
+	uint32_t reserved:14; /**< Reserved bits. */
 	struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
 };
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3] security: update session create API
  2020-10-14 18:56  2%   ` [dpdk-dev] [PATCH v3] " Akhil Goyal
@ 2020-10-15  1:11  0%     ` Lukasz Wojciechowski
  0 siblings, 0 replies; 200+ results
From: Lukasz Wojciechowski @ 2020-10-15  1:11 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
	declan.doherty, radu.nicolau, david.coyle,
	"'Lukasz Wojciechowski'",

Hi Akhil,

thank you for responding to review and for v3.

Your patch currently does not apply:
dpdk$ git apply v3-security-update-session-create-API.patch
error: patch failed: doc/guides/rel_notes/deprecation.rst:164
error: doc/guides/rel_notes/deprecation.rst: patch does not apply
error: patch failed: doc/guides/rel_notes/release_20_11.rst:344
error: doc/guides/rel_notes/release_20_11.rst: patch does not apply

and I'm sorry, but there are still a few things - see the inline comments below

W dniu 14.10.2020 o 20:56, Akhil Goyal pisze:
> The API ``rte_security_session_create`` takes only a single
> mempool for the session and the session private data. So the
> application needs to create a mempool for twice the number of
> sessions needed, which also leads to wastage of memory as
> session private data needs more memory compared to the session.
> Hence the API is modified to take two mempool pointers
> - one for the session and one for the private data.
> This is very similar to the crypto based session create APIs.
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
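For reference, a minimal sketch of the resulting call pattern with the two
mempools (dev_id, sess_conf and the pool sizing are placeholders, and the
rte_security.h/rte_mempool.h headers are assumed):

    struct rte_security_ctx *ctx = rte_cryptodev_get_sec_ctx(dev_id);
    unsigned int priv_size = rte_security_session_get_size(ctx);

    /* One pool for session headers, one sized for the driver private data. */
    struct rte_mempool *sess_mp = rte_mempool_create("sess_mp", 1024,
                    sizeof(struct rte_security_session), 0, 0,
                    NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
    struct rte_mempool *sess_priv_mp = rte_mempool_create("sess_priv_mp", 1024,
                    priv_size, 0, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);

    struct rte_security_session *sess = rte_security_session_create(ctx,
                    &sess_conf, sess_mp, sess_priv_mp);
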
> ---
> Changes in v3:
> fixed checkpatch issues.
> Added new test in test_security.c for priv_mempool
>
> Changes in V2:
> incorporated comments from Lukasz and David.
>
>   app/test-crypto-perf/cperf_ops.c       |   4 +-
>   app/test-crypto-perf/main.c            |  12 +-
>   app/test/test_cryptodev.c              |  18 ++-
>   app/test/test_ipsec.c                  |   3 +-
>   app/test/test_security.c               | 160 ++++++++++++++++++++++---
>   doc/guides/prog_guide/rte_security.rst |   8 +-
>   doc/guides/rel_notes/deprecation.rst   |   7 --
>   doc/guides/rel_notes/release_20_11.rst |   6 +
>   examples/ipsec-secgw/ipsec-secgw.c     |  12 +-
>   examples/ipsec-secgw/ipsec.c           |   9 +-
>   lib/librte_security/rte_security.c     |   7 +-
>   lib/librte_security/rte_security.h     |   4 +-
>   12 files changed, 196 insertions(+), 54 deletions(-)
>
> diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
> index 3da835a9c..3a64a2c34 100644
> --- a/app/test-crypto-perf/cperf_ops.c
> +++ b/app/test-crypto-perf/cperf_ops.c
> @@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>   
>   		/* Create security session */
>   		return (void *)rte_security_session_create(ctx,
> -					&sess_conf, sess_mp);
> +					&sess_conf, sess_mp, priv_mp);
>   	}
>   	if (options->op_type == CPERF_DOCSIS) {
>   		enum rte_security_docsis_direction direction;
> @@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>   
>   		/* Create security session */
>   		return (void *)rte_security_session_create(ctx,
> -					&sess_conf, priv_mp);
> +					&sess_conf, sess_mp, priv_mp);
>   	}
>   #endif
>   	sess = rte_cryptodev_sym_session_create(sess_mp);
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 62ae6048b..53864ffdd 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
>   		if (sess_size > max_sess_size)
>   			max_sess_size = sess_size;
>   	}
> -
> +#ifdef RTE_LIBRTE_SECURITY
> +	for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
> +		sess_size = rte_security_session_get_size(
> +				rte_cryptodev_get_sec_ctx(cdev_id));
> +		if (sess_size > max_sess_size)
> +			max_sess_size = sess_size;
> +	}
> +#endif
>   	/*
>   	 * Calculate number of needed queue pairs, based on the amount
>   	 * of available number of logical cores and crypto devices.
> @@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
>   				opts->nb_qps * nb_slaves;
>   #endif
>   		} else
> -			sessions_needed = enabled_cdev_count *
> -						opts->nb_qps * 2;
> +			sessions_needed = enabled_cdev_count * opts->nb_qps;
>   
>   		/*
>   		 * A single session is required per queue pair
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index c7975ed01..9f1b92c51 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -773,9 +773,15 @@ testsuite_setup(void)
>   	unsigned int session_size =
>   		rte_cryptodev_sym_get_private_session_size(dev_id);
>   
> +#ifdef RTE_LIBRTE_SECURITY
> +	unsigned int security_session_size = rte_security_session_get_size(
> +			rte_cryptodev_get_sec_ctx(dev_id));
> +
> +	if (session_size < security_session_size)
> +		session_size = security_session_size;
> +#endif
>   	/*
> -	 * Create mempool with maximum number of sessions * 2,
> -	 * to include the session headers
> +	 * Create mempool with maximum number of sessions.
>   	 */
>   	if (info.sym.max_nb_sessions != 0 &&
>   			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
> @@ -7751,7 +7757,8 @@ test_pdcp_proto(int i, int oop,
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx,
> -				&sess_conf, ts_params->session_priv_mpool);
> +				&sess_conf, ts_params->session_mpool,
> +				ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
>   		printf("TestCase %s()-%d line %d failed %s: ",
> @@ -8011,7 +8018,8 @@ test_pdcp_proto_SGL(int i, int oop,
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx,
> -				&sess_conf, ts_params->session_priv_mpool);
> +				&sess_conf, ts_params->session_mpool,
> +				ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
>   		printf("TestCase %s()-%d line %d failed %s: ",
> @@ -8368,6 +8376,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> +					ts_params->session_mpool,
>   					ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
> @@ -8543,6 +8552,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> +					ts_params->session_mpool,
>   					ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
> diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> index 79d00d7e0..9ad07a179 100644
> --- a/app/test/test_ipsec.c
> +++ b/app/test/test_ipsec.c
> @@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
>   	static struct rte_security_session_conf conf;
>   
>   	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
> -					&conf, qp->mp_session_private);
> +					&conf, qp->mp_session,
> +					qp->mp_session_private);
>   
>   	if (ut->ss[j].security.ses == NULL)
>   		return -ENOMEM;
> diff --git a/app/test/test_security.c b/app/test/test_security.c
> index 77fd5adc6..35ed6ff10 100644
> --- a/app/test/test_security.c
> +++ b/app/test/test_security.c
> @@ -200,6 +200,24 @@
>   			expected_mempool_usage, mempool_usage);		\
>   } while (0)
>   
> +/**
> + * Verify usage of mempool by checking if number of allocated objects matches
> + * expectations. The mempool is used to manage objects for sessions priv data.
> + * A single object is acquired from mempool during session_create
> + * and put back in session_destroy.
> + *
> + * @param   expected_priv_mp_usage	expected number of used priv mp objects
> + */
> +#define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do {		\
> +	struct security_testsuite_params *ts_params = &testsuite_params;\
> +	unsigned int priv_mp_usage;					\
> +	priv_mp_usage = rte_mempool_in_use_count(			\
> +			ts_params->session_priv_mpool);			\
> +	TEST_ASSERT_EQUAL(expected_priv_mp_usage, priv_mp_usage,	\
> +			"Expecting %u priv mempool allocations, "		\
one tab less
> +			"but there are %u allocated objects",		\
> +			expected_priv_mp_usage, priv_mp_usage);		\
> +} while (0)
>   
>   /**
>    * Mockup structures and functions for rte_security_ops;
> @@ -237,27 +255,38 @@ static struct mock_session_create_data {
>   	struct rte_security_session_conf *conf;
>   	struct rte_security_session *sess;
>   	struct rte_mempool *mp;
> +	struct rte_mempool *priv_mp;
>   
>   	int ret;
>   
>   	int called;
>   	int failed;
> -} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
> +} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
>   
>   static int
>   mock_session_create(void *device,
>   		struct rte_security_session_conf *conf,
>   		struct rte_security_session *sess,
> -		struct rte_mempool *mp)
> +		struct rte_mempool *priv_mp)
>   {
> +	void *sess_priv;
> +	int ret;
> +
>   	mock_session_create_exp.called++;
>   
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
> -	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
> +	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
> +	ret = rte_mempool_get(priv_mp, &sess_priv);
> +	TEST_ASSERT_EQUAL(0, ret,
> +		"priv mempool does not have enough objects");
>   
> +	set_sec_session_private_data(sess, sess_priv);

If the op function doesn't return 0, it also shouldn't leave sess_priv set
in sess.
Maybe put the code for getting sess_priv from the mempool and setting it in
the session inside:
if (mock_session_create_exp.ret == 0) {
...
}
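In other words, something along these lines (a sketch of the suggestion,
reusing only the code already visible in the mock above):

	/* Touch the private data pool only when the mocked op will succeed. */
	if (mock_session_create_exp.ret == 0) {
		ret = rte_mempool_get(priv_mp, &sess_priv);
		TEST_ASSERT_EQUAL(0, ret,
			"priv mempool does not have enough objects");
		set_sec_session_private_data(sess, sess_priv);
	}

	mock_session_create_exp.sess = sess;

	return mock_session_create_exp.ret;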

>   	mock_session_create_exp.sess = sess;
>   
> +	if (mock_session_create_exp.ret != 0)
> +		rte_mempool_put(priv_mp, sess_priv);
> +
>   	return mock_session_create_exp.ret;
>   }
>   
> @@ -363,8 +392,10 @@ static struct mock_session_destroy_data {
>   static int
>   mock_session_destroy(void *device, struct rte_security_session *sess)
>   {
> -	mock_session_destroy_exp.called++;
> +	void *sess_priv = get_sec_session_private_data(sess);
>   
> +	mock_session_destroy_exp.called++;
> +	rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
sess_priv should be released only if the op function is going to succeed.
You can check that in a similar way as you did in the create op, by checking
mock_session_destroy_exp.ret.
Otherwise the testcase test_session_destroy_ops_failure might cause a
problem, because you are putting the same object into the mempool twice
(once in mock_session_destroy and a 2nd time in ut_teardown when the
session is destroyed).
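i.e. roughly (a sketch of the suggestion, based on the mock shown above):

	static int
	mock_session_destroy(void *device, struct rte_security_session *sess)
	{
		void *sess_priv = get_sec_session_private_data(sess);

		mock_session_destroy_exp.called++;
		/* Release the private data only when the mocked op reports success. */
		if (mock_session_destroy_exp.ret == 0 && sess_priv != NULL)
			rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);

		MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
		MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);

		return mock_session_destroy_exp.ret;
	}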
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
>   
> @@ -502,6 +533,7 @@ struct rte_security_ops mock_ops = {
>    */
>   static struct security_testsuite_params {
>   	struct rte_mempool *session_mpool;
> +	struct rte_mempool *session_priv_mpool;
>   } testsuite_params = { NULL };
>   
>   /**
> @@ -524,9 +556,11 @@ static struct security_unittest_params {
>   	.sess = NULL,
>   };
>   
> -#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
> +#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
> +#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
>   #define SECURITY_TEST_MEMPOOL_SIZE 15
> -#define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
> +#define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
> +#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
>   
>   /**
>    * testsuite_setup initializes whole test suite parameters.
> @@ -540,11 +574,27 @@ testsuite_setup(void)
>   	ts_params->session_mpool = rte_mempool_create(
>   			SECURITY_TEST_MEMPOOL_NAME,
>   			SECURITY_TEST_MEMPOOL_SIZE,
> -			SECURITY_TEST_SESSION_OBJECT_SIZE,
> +			SECURITY_TEST_SESSION_OBJ_SZ,
>   			0, 0, NULL, NULL, NULL, NULL,
>   			SOCKET_ID_ANY, 0);
>   	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
>   			"Cannot create mempool %s\n", rte_strerror(rte_errno));
> +
> +	ts_params->session_priv_mpool = rte_mempool_create(
> +			SECURITY_TEST_PRIV_MEMPOOL_NAME,
> +			SECURITY_TEST_MEMPOOL_SIZE,
> +			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
> +			0, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +	if (ts_params->session_priv_mpool == NULL) {
> +		RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
> +				"Cannot create priv mempool %s\n",
> +				__func__, __LINE__, rte_strerror(rte_errno));
> +		rte_mempool_free(ts_params->session_mpool);
> +		ts_params->session_mpool = NULL;
> +		return TEST_FAILED;
> +	}
> +
>   	return TEST_SUCCESS;
>   }
>   
> @@ -559,6 +609,10 @@ testsuite_teardown(void)
>   		rte_mempool_free(ts_params->session_mpool);
>   		ts_params->session_mpool = NULL;
>   	}
> +	if (ts_params->session_priv_mpool) {
> +		rte_mempool_free(ts_params->session_priv_mpool);
> +		ts_params->session_priv_mpool = NULL;
> +	}
>   }
>   
>   /**
> @@ -656,10 +710,12 @@ ut_setup_with_session(void)
>   	mock_session_create_exp.device = NULL;
>   	mock_session_create_exp.conf = &ut_params->conf;
>   	mock_session_create_exp.mp = ts_params->session_mpool;
> +	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
>   	mock_session_create_exp.ret = 0;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
>   			sess);
>   	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> @@ -701,11 +757,13 @@ test_session_create_inv_context(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(NULL, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
> @@ -725,11 +783,13 @@ test_session_create_inv_context_ops(void)
>   	ut_params->ctx.ops = NULL;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
> @@ -749,11 +809,13 @@ test_session_create_inv_context_ops_fun(void)
>   	ut_params->ctx.ops = &empty_ops;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
> @@ -770,18 +832,21 @@ test_session_create_inv_configuration(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, NULL,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
>   }
>   
>   /**
> - * Test execution of rte_security_session_create with NULL mp parameter
> + * Test execution of rte_security_session_create with NULL session
> + * mempool
>    */
>   static int
>   test_session_create_inv_mempool(void)
> @@ -790,11 +855,35 @@ test_session_create_inv_mempool(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			NULL);
> +			NULL, NULL);
...NULL, ts_params->session_priv_mpool); would be better, as it would
test whether making the primary mempool NULL is the cause of the
session_create failure.
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
> +	TEST_ASSERT_SESSION_COUNT(0);
> +
> +	return TEST_SUCCESS;
> +}
> +
> +/**
> + * Test execution of rte_security_session_create with NULL session
> + * priv mempool
> + */
> +static int
> +test_session_create_inv_sess_priv_mempool(void)
> +{
> +	struct security_unittest_params *ut_params = &unittest_params;
> +	struct security_testsuite_params *ts_params = &testsuite_params;
> +	struct rte_security_session *sess;
> +
> +	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> +			ts_params->session_mpool, NULL);
> +	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> +			sess, NULL, "%p");
> +	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> +	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
> @@ -810,6 +899,7 @@ test_session_create_mempool_empty(void)
>   	struct security_testsuite_params *ts_params = &testsuite_params;
>   	struct security_unittest_params *ut_params = &unittest_params;
>   	struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
> +	void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
>   	struct rte_security_session *sess;
>   
>   	/* Get all available objects from mempool. */
> @@ -820,21 +910,34 @@ test_session_create_mempool_empty(void)
>   		TEST_ASSERT_EQUAL(0, ret,
>   				"Expect getting %d object from mempool"
>   				" to succeed", i);
> +		ret = rte_mempool_get(ts_params->session_priv_mpool,
> +				(void **)(&tmp1[i]));
> +		TEST_ASSERT_EQUAL(0, ret,
> +				"Expect getting %d object from priv mempool"
> +				" to succeed", i);
>   	}
>   	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
> +	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
> +	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	/* Put objects back to the pool. */
> -	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i)
> -		rte_mempool_put(ts_params->session_mpool, (void *)(tmp[i]));
> +	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
> +		rte_mempool_put(ts_params->session_mpool,
> +				(void *)(tmp[i]));
> +		rte_mempool_put(ts_params->session_priv_mpool,
> +				(tmp1[i]));
> +	}
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   
>   	return TEST_SUCCESS;
>   }
> @@ -853,14 +956,17 @@ test_session_create_ops_failure(void)
>   	mock_session_create_exp.device = NULL;
>   	mock_session_create_exp.conf = &ut_params->conf;
>   	mock_session_create_exp.mp = ts_params->session_mpool;
> +	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
>   	mock_session_create_exp.ret = -1;	/* Return failure status. */
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	return TEST_SUCCESS;
> @@ -879,10 +985,12 @@ test_session_create_success(void)
>   	mock_session_create_exp.device = NULL;
>   	mock_session_create_exp.conf = &ut_params->conf;
>   	mock_session_create_exp.mp = ts_params->session_mpool;
> +	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
>   	mock_session_create_exp.ret = 0;	/* Return success status. */
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
>   			sess);
>   	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> @@ -891,6 +999,7 @@ test_session_create_success(void)
>   			sess, mock_session_create_exp.sess);
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	/*
> @@ -1276,6 +1385,7 @@ test_session_destroy_inv_context(void)
>   	struct security_unittest_params *ut_params = &unittest_params;
>   
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(NULL, ut_params->sess);
> @@ -1283,6 +1393,7 @@ test_session_destroy_inv_context(void)
>   			ret, -EINVAL, "%d");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	return TEST_SUCCESS;
> @@ -1299,6 +1410,7 @@ test_session_destroy_inv_context_ops(void)
>   	ut_params->ctx.ops = NULL;
>   
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1307,6 +1419,7 @@ test_session_destroy_inv_context_ops(void)
>   			ret, -EINVAL, "%d");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	return TEST_SUCCESS;
> @@ -1323,6 +1436,7 @@ test_session_destroy_inv_context_ops_fun(void)
>   	ut_params->ctx.ops = &empty_ops;
>   
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1331,6 +1445,7 @@ test_session_destroy_inv_context_ops_fun(void)
>   			ret, -ENOTSUP, "%d");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	return TEST_SUCCESS;
> @@ -1345,6 +1460,7 @@ test_session_destroy_inv_session(void)
>   	struct security_unittest_params *ut_params = &unittest_params;
>   
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
> @@ -1352,6 +1468,7 @@ test_session_destroy_inv_session(void)
>   			ret, -EINVAL, "%d");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	return TEST_SUCCESS;
> @@ -1371,6 +1488,7 @@ test_session_destroy_ops_failure(void)
>   	mock_session_destroy_exp.ret = -1;
>   
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(&ut_params->ctx,
You can also add:
    TEST_ASSERT_PRIV_MP_USAGE(1);
in line 1500, after rte_security_session_destroy() returns, to verify
that the private mempool usage stays at the same level after a failure of
the destroy op. Currently, adding it without fixing the mock of the
session_destroy op will cause a test failure:

EAL: Test assert test_session_destroy_ops_failure line 1500 failed: 
Expecting 1 priv mempool allocations, but there are 0 allocated objects
EAL: in ../app/test/test_security.c:1500 test_session_destroy_ops_failure

> @@ -1396,6 +1514,7 @@ test_session_destroy_success(void)
>   	mock_session_destroy_exp.sess = ut_params->sess;
>   	mock_session_destroy_exp.ret = 0;
>   	TEST_ASSERT_MEMPOOL_USAGE(1);
> +	TEST_ASSERT_PRIV_MP_USAGE(1);
>   	TEST_ASSERT_SESSION_COUNT(1);
>   
>   	int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1404,6 +1523,7 @@ test_session_destroy_success(void)
>   			ret, 0, "%d");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
>   	TEST_ASSERT_MEMPOOL_USAGE(0);
> +	TEST_ASSERT_PRIV_MP_USAGE(0);
>   	TEST_ASSERT_SESSION_COUNT(0);
>   
>   	/*
> @@ -2370,6 +2490,8 @@ static struct unit_test_suite security_testsuite  = {
>   				test_session_create_inv_configuration),
>   		TEST_CASE_ST(ut_setup, ut_teardown,
>   				test_session_create_inv_mempool),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +				test_session_create_inv_sess_priv_mempool),
>   		TEST_CASE_ST(ut_setup, ut_teardown,
>   				test_session_create_mempool_empty),
>   		TEST_CASE_ST(ut_setup, ut_teardown,
> diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
> index 127da2e4f..d30a79576 100644
> --- a/doc/guides/prog_guide/rte_security.rst
> +++ b/doc/guides/prog_guide/rte_security.rst
> @@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
>   
>   The Security framework provides APIs to create and free sessions for crypto/ethernet
>   devices, where sessions are mempool objects. It is the application's responsibility
> -to create and manage the session mempools. The mempool object size should be able to
> -accommodate the driver's private data of security session.
> +to create and manage two session mempools - one for session and other for session
> +private data. The private session data mempool object size should be able to
> +accommodate the driver's private data of security session. The application can get
> +the size of session private data using API ``rte_security_session_get_size``.
> +And the session mempool object size should be enough to accommodate
> +``rte_security_session``.
>   
>   Once the session mempools have been created, ``rte_security_session_create()``
>   is used to allocate and initialize a session for the required crypto/ethernet device.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 43cdd3c58..26be1b3de 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -164,13 +164,6 @@ Deprecation Notices
>     following the IPv6 header, as proposed in RFC
>   https://mails.dpdk.org/archives/dev/2020-August/177257.html.
>   
> -* security: The API ``rte_security_session_create`` takes only single mempool
> -  for session and session private data. So the application need to create
> -  mempool for twice the number of sessions needed and will also lead to
> -  wastage of memory as session private data need more memory compared to session.
> -  Hence the API will be modified to take two mempool pointers - one for session
> -  and one for private data.
> -
>   * cryptodev: support for using IV with all sizes is added, J0 still can
>     be used but only when IV length in following structs ``rte_crypto_auth_xform``,
>     ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f1b9b4dfe..0fb1b20cb 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -344,6 +344,12 @@ API Changes
>   * The structure ``rte_crypto_sym_vec`` is updated to support both
>     cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
>   
> +* security: The API ``rte_security_session_create`` is updated to take two
> +  mempool objects one for session and other for session private data.
> +  So the application need to create two mempools and get the size of session
> +  private data using API ``rte_security_session_get_size`` for private session
> +  mempool.
> +
>   
>   ABI Changes
>   -----------
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 60132c4bd..2326089bb 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
>   
>   	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
>   			"sess_mp_%u", socket_id);
> -	/*
> -	 * Doubled due to rte_security_session_create() uses one mempool for
> -	 * session and for session private data.
> -	 */
>   	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> -		rte_lcore_count()) * 2;
> +		rte_lcore_count());
>   	sess_mp = rte_cryptodev_sym_session_pool_create(
>   			mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
>   			socket_id);
> @@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
>   
>   	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
>   			"sess_mp_priv_%u", socket_id);
> -	/*
> -	 * Doubled due to rte_security_session_create() uses one mempool for
> -	 * session and for session private data.
> -	 */
>   	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> -		rte_lcore_count()) * 2;
> +		rte_lcore_count());
>   	sess_mp = rte_mempool_create(mp_name,
>   			nb_sess,
>   			sess_sz,
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index 01faa7ac7..6baeeb342 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
>   			set_ipsec_conf(sa, &(sess_conf.ipsec));
>   
>   			ips->security.ses = rte_security_session_create(ctx,
> -					&sess_conf, ipsec_ctx->session_priv_pool);
> +					&sess_conf, ipsec_ctx->session_pool,
> +					ipsec_ctx->session_priv_pool);
>   			if (ips->security.ses == NULL) {
>   				RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> @@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>   		}
>   
>   		ips->security.ses = rte_security_session_create(sec_ctx,
> -				&sess_conf, skt_ctx->session_pool);
> +				&sess_conf, skt_ctx->session_pool,
> +				skt_ctx->session_priv_pool);
>   		if (ips->security.ses == NULL) {
>   			RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> @@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>   		sess_conf.userdata = (void *) sa;
>   
>   		ips->security.ses = rte_security_session_create(sec_ctx,
> -					&sess_conf, skt_ctx->session_pool);
> +					&sess_conf, skt_ctx->session_pool,
> +					skt_ctx->session_priv_pool);
>   		if (ips->security.ses == NULL) {
>   			RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
> index 515c29e04..ee4666026 100644
> --- a/lib/librte_security/rte_security.c
> +++ b/lib/librte_security/rte_security.c
> @@ -26,18 +26,21 @@
>   struct rte_security_session *
>   rte_security_session_create(struct rte_security_ctx *instance,
>   			    struct rte_security_session_conf *conf,
> -			    struct rte_mempool *mp)
> +			    struct rte_mempool *mp,
> +			    struct rte_mempool *priv_mp)
>   {
>   	struct rte_security_session *sess = NULL;
>   
>   	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
>   	RTE_PTR_OR_ERR_RET(conf, NULL);
>   	RTE_PTR_OR_ERR_RET(mp, NULL);
> +	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
>   
>   	if (rte_mempool_get(mp, (void **)&sess))
>   		return NULL;
>   
> -	if (instance->ops->session_create(instance->device, conf, sess, mp)) {
> +	if (instance->ops->session_create(instance->device, conf,
> +				sess, priv_mp)) {
>   		rte_mempool_put(mp, (void *)sess);
>   		return NULL;
>   	}
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 16839e539..1710cdd6a 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -386,6 +386,7 @@ struct rte_security_session {
>    * @param   instance	security instance
>    * @param   conf	session configuration parameters
>    * @param   mp		mempool to allocate session objects from
> + * @param   priv_mp	mempool to allocate session private data objects from
>    * @return
>    *  - On success, pointer to session
>    *  - On failure, NULL
> @@ -393,7 +394,8 @@ struct rte_security_session {
>   struct rte_security_session *
>   rte_security_session_create(struct rte_security_ctx *instance,
>   			    struct rte_security_session_conf *conf,
> -			    struct rte_mempool *mp);
> +			    struct rte_mempool *mp,
> +			    struct rte_mempool *priv_mp);
>   
>   /**
>    * Update security session as specified by the session configuration

-- 
Lukasz Wojciechowski
Principal Software Engineer

Samsung R&D Institute Poland
Samsung Electronics
Office +48 22 377 88 25
l.wojciechow@partner.samsung.com


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver
  @ 2020-10-15  0:55  3%         ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-15  0:55 UTC (permalink / raw)
  To: akhil.goyal, Raveendra Padasalagi, Vikas Gupta, ajit.khaparde
  Cc: dev, vikram.prakash, mdr

07/10/2020 19:18, Vikas Gupta:
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
> @@ -0,0 +1,3 @@
> +DPDK_21.0 {
> +	local: *;
> +};

No!
Please be careful, all other libs use ABI DPDK_21.

Will fix




^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-14 13:10  0%                         ` Medvedkin, Vladimir
@ 2020-10-14 23:57  0%                           ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-10-14 23:57 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Michel Machado, Kevin Traynor, Ruifeng Wang,
	Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, nd, Honnappa Nagarahalli, nd

<snip>

> >>
> >>
> >> On 13/10/2020 18:46, Michel Machado wrote:
> >>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
> >>>> Hi Michel,
> >>>>
> >>>> Could you please describe a condition under which the LPM gets
> >>>> inconsistent? As far as I can see, if there is no free tbl8, it will
> >>>> return -ENOSPC.
> >>>
> >>>     Consider this simple example: we need to add the following two
> >>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If
> >>> the LPM table is out of tbl8s, the second prefix is not added and
> >>> Gatekeeper will make decisions in violation of the policy. The data
> >>> structure of the LPM table is consistent, but its content is
> >>> inconsistent with the policy.
max_rules and number_tbl8s in 'struct rte_lpm' contain the config information. These 2 fields do not change based on the routes added and do not indicate the amount of space left. So, you cannot use this information to decide if there is enough space to add more routes.

> >>
> >> Aha, thanks. So do I understand correctly that you need to add a set
> >> of routes atomically (either the entire set is installed or nothing)?
> >
> >     Yes.
> >
> >> If so, then I would suggest having 2 lpm and switching them
> >> atomically after a successful addition. As for now, even if you have
> >> enough tbl8's, routes are installed non-atomically, i.e. there will
> >> be a time gap between adding two routes, so in this time interval the
> >> table will be inconsistent with the policy.
> >> Also, if new lpm algorithms are added to the DPDK, they won't have
> >> such a thing as tbl8.
> >
> >     Our code already deals with synchronization.
If the application code already deals with synchronization, is it possible to roll back (i.e. delete the routes that were added so far) when the addition of the route set fails?

> 
> OK, so my suggestion here would be to add new routes to the shadow copy
> of the lpm, and if that returns -ENOSPC, then create a new LPM with double
> the number of tbl8s and add all the routes to it. Then switch the
> active-shadow LPM pointers. In this case you'll always add a bulk of
> routes atomically.
> 
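
For illustration, a rough sketch of this active/shadow scheme against the
public rte_lpm API; the grow-on-ENOSPC path and all names below are
illustrative only:

#include <errno.h>
#include <rte_lpm.h>

struct route {
	uint32_t ip;
	uint8_t depth;
	uint32_t next_hop;
};

static struct rte_lpm *active_lpm;	/* used by lookup threads */
static struct rte_lpm *shadow_lpm;	/* updates are staged here */

static int
add_route_set(const struct route *set, unsigned int n)
{
	unsigned int i;
	int ret;

	for (i = 0; i < n; i++) {
		ret = rte_lpm_add(shadow_lpm, set[i].ip, set[i].depth,
				  set[i].next_hop);
		if (ret == -ENOSPC) {
			/* create a bigger LPM (e.g. double number_tbl8s),
			 * re-add all routes there and continue with it
			 * (not shown)
			 */
			return ret;
		}
		if (ret < 0)
			return ret;
	}

	/* publish the updated table to the lookup threads */
	struct rte_lpm *old = active_lpm;
	__atomic_store_n(&active_lpm, shadow_lpm, __ATOMIC_RELEASE);
	/* wait for readers to quiesce (e.g. RCU) before reusing 'old',
	 * then bring it up to date and keep it as the next shadow
	 */
	shadow_lpm = old;
	return 0;
}
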
> >
> >>>     We minimize the need to replace an LPM table by allocating LPM
> >>> tables with double what we need (see the example here:
> >>> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
> >>> but the code must be ready for unexpected needs that may arise in
> >>> production.
> >>>
> >>
> >> Usually, the table is initialized with a number of entries large
> >> enough to hold the expected number of routes. One tbl8 group
> >> takes up 1KB of memory, which is nothing compared to the size of
> >> tbl24, which is 64MB.
> >
> >     When the prefixes come from BGP, initializing a large enough table
> > is fine. But when prefixes come from threat intelligence, the number
> > of prefixes can vary wildly, and prefixes longer than 24 bits are far
> > more common.
> >
> >> P.S. consider using rte_fib library, it has a number of advantages
> >> over LPM. You can replace the loop in __lookup_fib_bulk() with a bulk
> >> lookup call and this will probably increase the speed.
> >
> >     I'm not aware of the rte_fib library. The only documentation that
> > I found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it
> > just says "FIB (Forwarding information base) implementation for IPv4
> > Longest Prefix Match".
> 
> That's true, I'm going to add a programmer's guide soon.
> Although the fib API is very similar to the existing LPM one.
> 
> >
> >>>>
> >>>> On 13/10/2020 15:58, Michel Machado wrote:
> >>>>> Hi Kevin,
> >>>>>
> >>>>>     We do need fields max_rules and number_tbl8s of struct
> >>>>> rte_lpm, so the removal would force us to have another patch to
> >>>>> our local copy of DPDK. We'd rather avoid this new local patch
> >>>>> because we wish to eventually be in sync with the stock DPDK.
> >>>>>
> >>>>>     Those fields are needed in Gatekeeper because we found a
> >>>>> condition in an ongoing deployment in which the entries of some
> >>>>> LPM tables may suddenly change a lot to reflect policy changes. To
> >>>>> avoid getting into a state in which the LPM table is inconsistent
> >>>>> because it cannot fit all the new entries, we compute the needed
> >>>>> parameters to support the new entries, and compare with the
> >>>>> current parameters. If the current table doesn't fit everything,
> >>>>> we have to replace it with a new LPM table.
> >>>>>
> >>>>>     If there were a way to obtain the struct rte_lpm_config of a
> >>>>> given LPM table, it would cleanly address our need. We have the
> >>>>> same need in IPv6 and have a local patch to work around it (see
> >>>>>
> >>>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
I do not see why such an API is not possible; we could add an API that returns max_rules and number_tbl8s (essentially, the config that was passed to the rte_lpm_create API).
But is there a possibility of storing that info in the application, since that data was passed to rte_lpm by the application?
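
No such getter exists today; a hypothetical prototype, just to make the
idea concrete (name and signature are illustrative only):

/* hypothetical: copy out the rte_lpm_config the table was created with */
int rte_lpm_config_get(const struct rte_lpm *lpm, struct rte_lpm_config *config);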

> >>>>> Thus, an IPv4 and IPv6 solution would be best.
> >>>>>
> >>>>>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to
> >>>>> this disscussion.
> >>>>>
> >>>>> [ ]'s
> >>>>> Michel Machado
> >>>>>
> >>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
> >>>>>> Hi Gatekeeper maintainers (I think),
> >>>>>>
> >>>>>> fyi - there is a proposal to remove some members of a struct in
> >>>>>> DPDK LPM API that Gatekeeper is using [1]. It would be only from
> >>>>>> DPDK 20.11 but as it's an LTS I guess it would probably hit
> >>>>>> Debian in a few months.
> >>>>>>
> >>>>>> The full thread is here:
> >>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
> >>>>>>
> >>>>>>
> >>>>>> Maybe you can take a look and tell us if they are needed in
> >>>>>> Gatekeeper or you can workaround it?
> >>>>>>
> >>>>>> thanks,
> >>>>>> Kevin.
> >>>>>>
> >>>>>> [1]
> >>>>>>
> >>>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
> >>>>>>
> >>>>>>
> >>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
> >>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
> >>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin,
> Vladimir
> >>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
> >>>>>>>> <bruce.richardson@intel.com>
> >>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
> >>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> >>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
> >>>>>>>>
> >>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
> >>>>>>>>>
> >>>>>>>>>> -----Original Message-----
> >>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> >>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
> >>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng
> >>>>>>>>>> Wang <Ruifeng.Wang@arm.com>
> >>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
> >>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> >>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
> >>>>>>>>>>
> >>>>>>>>>> Hi Ruifeng,
> >>>>>>>>>>
> >>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
> >>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang
> wrote:
> >>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no
> >>>>>>>>>>>> need to be exposed to the user.
> >>>>>>>>>>>> Hide the unneeded exposure of structure fields for better
> >>>>>>>>>>>> ABI maintainability.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Suggested-by: David Marchand
> <david.marchand@redhat.com>
> >>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> >>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
> >>>>>>>>>>>> ---
> >>>>>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
> >>>>>>>>>>>> +++++++++++++++++++++++---------------
> >>>>>>>>>> -
> >>>>>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
> >>>>>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
> >>>>>>>>>>>>
> >>>>>>>>>>> <snip>
> >>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> b/lib/librte_lpm/rte_lpm.h index 03da2d37e..112d96f37
> >>>>>>>>>>>> 100644
> >>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
> >>>>>>>>>>>>
> >>>>>>>>>>>>    /** @internal LPM structure. */
> >>>>>>>>>>>>    struct rte_lpm {
> >>>>>>>>>>>> -    /* LPM metadata. */
> >>>>>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
> >>>>>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
> >>>>>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
> >>>>>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
> >>>>>>>>>>>> -
> >>>>>>>>>>>>        /* LPM Tables. */
> >>>>>>>>>>>>        struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
> >>>>>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
> >>>>>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> >>>>>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> >>>>>>>>>>>>    };
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
> >>>>>>>>>>>
> >>>>>>>>>>> [Basically the return value point from rte_lpm_create() will
> >>>>>>>>>>> be different, and that return value could be used by
> >>>>>>>>>>> rte_lpm_lookup()
> >>>>>>>>>>> which as a static inline function will be in the binary and
> >>>>>>>>>>> using the old structure offsets.]
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be
> >>>>>>>>>> accepted without prior notice.
> >>>>>>>>>>
> >>>>>>>>> So if the change wants to happen in 20.11, a deprecation
> >>>>>>>>> notice should have been added in 20.08.
> >>>>>>>>> I should have added a deprecation notice. This change will
> >>>>>>>>> have to wait for
> >>>>>>>> next ABI update window.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Do you plan to extend? or is this just speculative?
> >>>>>>> It is speculative.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> A quick scan and there seems to be several projects using some
> >>>>>>>> of these
> >>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
> >>>>>>>> gatekeeper. I didn't look at the details to see if they are
> >>>>>>>> really needed.
> >>>>>>>>
> >>>>>>>> Not sure how much notice they'd need or if they update DPDK
> >>>>>>>> much, but I
> >>>>>>>> think it's worth having a closer look as to how they use lpm and
> >>>>>>>> what the
> >>>>>>>> impact to them is.
> >>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
> >>>>>>> access the members to be hidden.
> >>>>>>> They will not be impacted by this patch.
> >>>>>>> But Gatekeeper accesses the rte_lpm internal members that are to
> >>>>>>> be hidden. Its compilation will be broken by this patch.
> >>>>>>>
> >>>>>>>>
> >>>>>>>>> Thanks.
> >>>>>>>>> Ruifeng
> >>>>>>>>>>>>    /** LPM RCU QSBR configuration structure. */
> >>>>>>>>>>>> --
> >>>>>>>>>>>> 2.17.1
> >>>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> Regards,
> >>>>>>>>>> Vladimir
> >>>>>>>
> >>>>>>
> >>>>
> >>
> 
> --
> Regards,
> Vladimir

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-14 21:36  2%   ` Timothy McDaniel
  2020-10-14 21:36  6%   ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
  2020-10-15 14:26  7%   ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
  To: Hemant Agrawal, Nipun Gupta, Mattias Rönnblom, Jerin Jacob,
	Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren,
	Nikhil Rao, Ray Kinsella, Neil Horman
  Cc: dev, erik.g.carrillo, gage.eads

This commit implements the eventdev ABI changes required by
the DLB PMD.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dpaa/dpaa_eventdev.c             |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c           |  5 +-
 drivers/event/dsw/dsw_evdev.c                  |  3 +-
 drivers/event/octeontx/ssovf_evdev.c           |  5 +-
 drivers/event/octeontx2/otx2_evdev.c           |  3 +-
 drivers/event/opdl/opdl_evdev.c                |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c     |  5 +-
 drivers/event/sw/sw_evdev.c                    |  8 ++--
 drivers/event/sw/sw_evdev_selftest.c           |  6 +--
 lib/librte_eventdev/rte_event_eth_tx_adapter.c |  2 +-
 lib/librte_eventdev/rte_eventdev.c             | 66 +++++++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h             | 51 ++++++++++++++++----
 lib/librte_eventdev/rte_eventdev_pmd_pci.h     |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h       |  7 +--
 lib/librte_eventdev/rte_eventdev_version.map   |  4 +-
 15 files changed, 134 insertions(+), 38 deletions(-)

diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
 		RTE_EVENT_DEV_CAP_BURST_MODE |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 3ae4441..712db6c 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
 	port_conf->enqueue_depth =
 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
 	};
 }
 
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 4fc4e8f..1c6bcca 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = edev->max_num_events;
 	port_conf->dequeue_depth = 1;
 	port_conf->enqueue_depth = 1;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index b8b57c3..ae35bb5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 		.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
-		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
 	};
 
 	*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
 	dev_info->max_num_events = (1ULL << 20);
 	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
 					RTE_EVENT_DEV_CAP_BURST_MODE |
-					RTE_EVENT_DEV_CAP_EVENT_QOS;
+					RTE_EVENT_DEV_CAP_EVENT_QOS |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 32 * 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 98dae71..058f568 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 	}
 
 	p->inflight_max = conf->new_event_threshold;
-	p->implicit_release = !conf->disable_implicit_release;
+	p->implicit_release = !(conf->event_port_cfg &
+				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 
 	/* check if ring exists, same as rx_worker above */
 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
@@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 				RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
 				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
 	};
 
 	*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
 			.new_event_threshold = 1024,
 			.dequeue_depth = 32,
 			.enqueue_depth = 64,
-			.disable_implicit_release = 0,
 	};
 	if (num_ports > MAX_PORTS)
 		return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
 				.new_event_threshold = 128,
 				.dequeue_depth = 32,
 				.enqueue_depth = 64,
-				.disable_implicit_release = 0,
 		};
 		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 			printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
 		.new_event_threshold = 128,
 		.dequeue_depth = 32,
 		.enqueue_depth = 64,
-		.disable_implicit_release = 0,
 	};
 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 		printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
 	 * only be initialized once - and this needs to be set for multiple runs
 	 */
 	conf.new_event_threshold = 512;
-	conf.disable_implicit_release = disable_implicit_release;
+	conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
 		printf("Error setting up RX port\n");
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index bb21dc4..8a72256 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
 		return ret;
 	}
 
-	pc->disable_implicit_release = 0;
+	pc->event_port_cfg = 0;
 	ret = rte_event_port_setup(dev_id, port_id, pc);
 	if (ret) {
 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 82c177c..3a5b738 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -32,6 +32,7 @@
 #include <rte_ethdev.h>
 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
+#include <rte_compat.h>
 
 #include "rte_eventdev.h"
 #include "rte_eventdev_pmd.h"
@@ -437,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
 					dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_queues > info.max_event_queues) {
-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
-		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+	if (dev_conf->nb_event_queues > info.max_event_queues +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 info.max_event_queues,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues -
+			dev_conf->nb_single_link_event_port_queues >
+			info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_queues);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_single_link_event_port_queues >
+			dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_queues);
 		return -EINVAL;
 	}
 
@@ -448,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_ports > info.max_event_ports) {
-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
-		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+	if (dev_conf->nb_event_ports > info.max_event_ports +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 info.max_event_ports,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports -
+			dev_conf->nb_single_link_event_port_queues
+			> info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_ports);
+		return -EINVAL;
+	}
+
+	if (dev_conf->nb_single_link_event_port_queues >
+	    dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR(
+				 "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_ports);
 		return -EINVAL;
 	}
 
@@ -737,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		return -EINVAL;
 	}
 
-	if (port_conf && port_conf->disable_implicit_release &&
+	if (port_conf &&
+	    (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
 	    !(dev->data->event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		RTE_EDEV_LOG_ERR(
@@ -830,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
 		*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
 		break;
+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+	{
+		uint32_t config;
+
+		config = dev->data->ports_cfg[port_id].event_port_cfg;
+		*attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+		break;
+	}
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
  * single queue to each port or map a single queue to many port.
  */
 
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
 	 * event port by this device.
 	 * A device that does not support bulk enqueue will set this as 1.
 	 */
+	uint8_t max_event_port_links;
+	/**< Maximum number of queues that can be linked to a single event
+	 * port by this device.
+	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
 	 * can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
 	 */
 	uint32_t event_dev_cap;
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	uint8_t max_single_link_event_port_queue_pairs;
+	/**< Maximum number of event ports and queues that are optimized for
+	 * (and only capable of) single-link configurations supported by this
+	 * device. These ports and queues are not accounted for in
+	 * max_event_ports or max_event_queues.
+	 */
 };
 
 /**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+	uint8_t nb_single_link_event_port_queues;
+	/**< Number of event ports and queues that will be singly-linked to
+	 * each other. These are a subset of the overall event ports and
+	 * queues; this value cannot exceed *nb_event_ports* or
+	 * *nb_event_queues*. If the device has ports and queues that are
+	 * optimized for single-link usage, this field is a hint for how many
+	 * to allocate; otherwise, regular event ports and queues can be used.
+	 */
 };
 
 /**
@@ -519,7 +543,6 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf);
 
-
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 /* Event port specific APIs */
 
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
 /** Event port configuration structure */
 struct rte_event_port_conf {
 	int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
 	 * which previously supplied to rte_event_dev_configure().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
 	 */
-	uint8_t disable_implicit_release;
-	/**< Configure the port not to release outstanding events in
-	 * rte_event_dev_dequeue_burst(). If true, all events received through
-	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is not
-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
-	 */
+	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
 };
 
 /**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
  * The new event threshold of the port
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
 /**
  * Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
 	return -ENXIO;
 }
 
-
 /**
  * @internal
  * Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+	rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_ptr(conf_cb);
 	rte_trace_point_emit_int(rc);
 )
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
 	# added in 20.05
 	__rte_eventdev_trace_configure;
 	__rte_eventdev_trace_queue_setup;
-	__rte_eventdev_trace_port_setup;
 	__rte_eventdev_trace_port_link;
 	__rte_eventdev_trace_port_unlink;
 	__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
 	__rte_eventdev_trace_crypto_adapter_start;
 	__rte_eventdev_trace_crypto_adapter_stop;
+
+	# changed in 20.11
+	__rte_eventdev_trace_port_setup;
 };
-- 
2.6.4


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI
  2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-14 21:36  2%   ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-14 21:36  6%   ` Timothy McDaniel
  2020-10-15 14:26  7%   ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
  2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
  To: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, erik.g.carrillo, gage.eads, hemant.agrawal

Several data structures and constants changed, or were added,
in the previous patch.  This commit updates the dependent
apps and examples to use the new ABI.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 app/test-eventdev/evt_common.h                     | 11 ++++++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++------
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 ++++++++++++++++------
 app/test/test_eventdev.c                           |  4 +--
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
 11 files changed, 80 insertions(+), 26 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+			true : false;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
 			.dequeue_timeout_ns = opt->deq_tmo_nsec,
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = info.max_num_events,
 			.nb_event_queue_flows = opt->nb_flows,
 			.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.sub_event_type == 0) { /* stage 0 from producer */
 			order_atq_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
 }
 
 static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].sub_event_type == 0) { /*stage 0 */
 				order_atq_process_stage_0(&ev[i]);
 			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_atq_worker_burst(arg);
-	else
-		return order_atq_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_atq_worker_burst(arg, true);
+		else
+			return order_atq_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_atq_worker(arg, true);
+		else
+			return order_atq_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
 		const uint32_t flow = (uintptr_t)m % nb_flows;
 		/* Maintain seq number per flow */
 		m->seqn = producer_flow_seq[flow]++;
+		m->udata64 = flow;
 
 		ev.flow_id = flow;
 		ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.queue_id == 0) { /* from ordered queue */
 			order_queue_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
 }
 
 static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].queue_id == 0) { /* from ordered queue */
 				order_queue_process_stage_0(&ev[i]);
 			} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_queue_worker_burst(arg);
-	else
-		return order_queue_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_queue_worker_burst(arg, true);
+		else
+			return order_queue_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_queue_worker(arg, true);
+		else
+			return order_queue_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
 	if (!(info.event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
-		pconf.disable_implicit_release = 1;
+		pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
 		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-		pconf.disable_implicit_release = 0;
+		pconf.event_port_cfg = 0;
 	}
 
 	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 1,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
-		.nb_atomic_order_sequences = 1024,
+			.nb_atomic_order_sequences = 1024,
 	};
 	struct rte_event_queue_conf tx_q_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	disable_implicit_release = (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 
-	wkr_p_conf.disable_implicit_release = disable_implicit_release;
+	wkr_p_conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (dev_info.max_num_events < config.nb_events_limit)
 		config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
-- 
2.6.4


^ permalink raw reply	[relevance 6%]

* [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
  @ 2020-10-14 21:36  9% ` Timothy McDaniel
  2020-10-14 21:36  2%   ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
                     ` (2 more replies)
  2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
  2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, hemant.agrawal

This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.

The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.
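
For reference, the workaround used by the test app in patch 2/2 is to
stash the flow ID alongside the event at enqueue time and restore it on
dequeue whenever the new capability flag is not advertised; roughly:

	/* producer side */
	m->udata64 = flow;
	ev.flow_id = flow;

	/* worker side, when the device does not advertise
	 * RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
	 */
	ev.flow_id = ev.mbuf->udata64;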

Due to the above, we would like to propose the following enhancements.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

3) Replace the dedicated disable_implicit_release field with a bit field
of explicit port configuration flags. The implicit-release-disable
functionality is assigned to one bit, and a port-is-single-link-only
attribute is assigned to another, with the remaining bits available for
future assignment (a short usage sketch follows below).
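
Putting the three items together, a rough sketch of how an application
might consume the new fields; the function name, counts and port ID used
here are illustrative, and the remaining config fields (depths, limits)
are omitted for brevity:

#include <string.h>
#include <rte_eventdev.h>

static int
configure_with_single_link_ports(uint8_t dev_id, uint8_t nb_wkr_ports,
				 uint8_t nb_wkr_queues, uint8_t nb_tx_ports,
				 uint8_t tx_port_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg;
	struct rte_event_port_conf pconf;
	int ret;

	rte_event_dev_info_get(dev_id, &info);

	/* 1) new limits/capabilities: if RTE_EVENT_DEV_CAP_CARRY_FLOW_ID is
	 * not set in info.event_dev_cap, the app must carry the flow ID
	 * itself; info.max_single_link_event_port_queue_pairs bounds the
	 * value used below
	 */

	/* 2) declare how many port/queue pairs are single-link only */
	memset(&cfg, 0, sizeof(cfg));
	cfg.nb_event_ports = nb_wkr_ports + nb_tx_ports;
	cfg.nb_event_queues = nb_wkr_queues + nb_tx_ports;
	cfg.nb_single_link_event_port_queues = nb_tx_ports;
	ret = rte_event_dev_configure(dev_id, &cfg);
	if (ret < 0)
		return ret;

	/* 3) per-port flags replace the old disable_implicit_release byte */
	rte_event_port_default_conf_get(dev_id, tx_port_id, &pconf);
	pconf.event_port_cfg = RTE_EVENT_PORT_CFG_SINGLE_LINK;
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
	return rte_event_port_setup(dev_id, tx_port_id, &pconf);
}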

Note that it was requested that we split these app/test
changes out from the eventdev ABI patch. As a result,
neither of these patches will build without the other
also being applied.

Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang

Testing showed no performance impact from the flow_id template code
added to the test app.

[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html


Timothy McDaniel (2):
  eventdev: eventdev: express DLB/DLB2 PMD constraints
  eventdev: update app and examples for new eventdev ABI

 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 ++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 66 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 26 files changed, 214 insertions(+), 64 deletions(-)

-- 
2.6.4


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v3] eventdev: update app and examples for new eventdev ABI
  2020-10-14 17:33  6% ` [dpdk-dev] [PATCH v3] " Timothy McDaniel
@ 2020-10-14 20:01  4%   ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-14 20:01 UTC (permalink / raw)
  To: Timothy McDaniel
  Cc: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh, dpdk-dev,
	Erik Gabriel Carrillo, Gage Eads

On Wed, Oct 14, 2020 at 11:01 PM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Several data structures and constants changed, or were added,
> in the previous patch.  This commit updates the dependent
> apps and examples to use the new ABI.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

Please send both the spec patch and this patch as a series, not this
http://patches.dpdk.org/patch/80782/ alone.
Reason: the spec patch[1] still has an apply issue[2]. Please rebase both
patches on top of next-eventdev and send them as a series.

I am changing the patchwork status for the following patches as
"Changes requested"
http://patches.dpdk.org/patch/79715/
http://patches.dpdk.org/patch/79716/
http://patches.dpdk.org/patch/80782/

[1]
http://patches.dpdk.org/patch/79715/
[2]
[for-main]dell[dpdk-next-eventdev] $ date &&
/home/jerin/config/scripts/build_each_patch.sh /tmp/r/ && date
Thu Oct 15 01:23:43 AM IST 2020
HEAD is now at 1d41eebe8 event/sw: performance improvements
meson build test
Applying: eventdev: eventdev: express DLB/DLB2 PMD constraints
Using index info to reconstruct a base tree...
M       drivers/event/dpaa2/dpaa2_eventdev.c
M       drivers/event/octeontx/ssovf_evdev.c
M       drivers/event/octeontx2/otx2_evdev.c
M       drivers/event/sw/sw_evdev.c
M       lib/librte_eventdev/rte_event_eth_tx_adapter.c
M       lib/librte_eventdev/rte_eventdev.c
Falling back to patching base and 3-way merge...
Auto-merging lib/librte_eventdev/rte_eventdev.c
CONFLICT (content): Merge conflict in lib/librte_eventdev/rte_eventdev.c
Auto-merging lib/librte_eventdev/rte_event_eth_tx_adapter.c
Auto-merging drivers/event/sw/sw_evdev.c
Auto-merging drivers/event/octeontx2/otx2_evdev.c
Auto-merging drivers/event/octeontx/ssovf_evdev.c
Auto-merging drivers/event/dpaa2/dpaa2_eventdev.c
Recorded preimage for 'lib/librte_eventdev/rte_eventdev.c'
error: Failed to merge in the changes.
Patch failed at 0001 eventdev: eventdev: express DLB/DLB2 PMD constraints
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
git am failed /tmp/r//v2-1-2-eventdev-eventdev-express-DLB-DLB2-PMD-constraints
HEAD is now at 1d41eebe8 event/sw: performance improvements
Thu Oct 15 01:23:43 AM IST 2020






> ---
>  app/test-eventdev/evt_common.h                     | 11 ++++++++
>  app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++------
>  app/test-eventdev/test_order_common.c              |  1 +
>  app/test-eventdev/test_order_queue.c               | 29 ++++++++++++++++------
>  app/test/test_eventdev.c                           |  4 +--
>  .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
>  examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
>  examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
>  examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
>  11 files changed, 80 insertions(+), 26 deletions(-)
>
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f9d7378..a1da1cf 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
>                         true : false;
>  }
>
> +static inline bool
> +evt_has_flow_id(uint8_t dev_id)
> +{
> +       struct rte_event_dev_info dev_info;
> +
> +       rte_event_dev_info_get(dev_id, &dev_info);
> +       return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
> +                       true : false;
> +}
> +
>  static inline int
>  evt_service_setup(uint32_t service_id)
>  {
> @@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
>                         .dequeue_timeout_ns = opt->deq_tmo_nsec,
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = info.max_num_events,
>                         .nb_event_queue_flows = opt->nb_flows,
>                         .nb_event_port_dequeue_depth =
> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
> index 3366cfc..cfcb1dc 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
>  }
>
>  static int
> -order_atq_worker(void *arg)
> +order_atq_worker(void *arg, const bool flow_id_cap)
>  {
>         ORDER_WORKER_INIT;
>         struct rte_event ev;
> @@ -34,6 +34,9 @@ order_atq_worker(void *arg)
>                         continue;
>                 }
>
> +               if (!flow_id_cap)
> +                       ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.sub_event_type == 0) { /* stage 0 from producer */
>                         order_atq_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_atq_worker(void *arg)
>  }
>
>  static int
> -order_atq_worker_burst(void *arg)
> +order_atq_worker_burst(void *arg, const bool flow_id_cap)
>  {
>         ORDER_WORKER_INIT;
>         struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +                       if (!flow_id_cap)
> +                               ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].sub_event_type == 0) { /*stage 0 */
>                                 order_atq_process_stage_0(&ev[i]);
>                         } else if (ev[i].sub_event_type == 1) { /* stage 1 */
> @@ -95,11 +101,19 @@ worker_wrapper(void *arg)
>  {
>         struct worker_data *w  = arg;
>         const bool burst = evt_has_burst_mode(w->dev_id);
> -
> -       if (burst)
> -               return order_atq_worker_burst(arg);
> -       else
> -               return order_atq_worker(arg);
> +       const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> +       if (burst) {
> +               if (flow_id_cap)
> +                       return order_atq_worker_burst(arg, true);
> +               else
> +                       return order_atq_worker_burst(arg, false);
> +       } else {
> +               if (flow_id_cap)
> +                       return order_atq_worker(arg, true);
> +               else
> +                       return order_atq_worker(arg, false);
> +       }
>  }
>
>  static int
> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
> index 4190f9a..7942390 100644
> --- a/app/test-eventdev/test_order_common.c
> +++ b/app/test-eventdev/test_order_common.c
> @@ -49,6 +49,7 @@ order_producer(void *arg)
>                 const uint32_t flow = (uintptr_t)m % nb_flows;
>                 /* Maintain seq number per flow */
>                 m->seqn = producer_flow_seq[flow]++;
> +               m->udata64 = flow;
>
>                 ev.flow_id = flow;
>                 ev.mbuf = m;
> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
> index 495efd9..1511c00 100644
> --- a/app/test-eventdev/test_order_queue.c
> +++ b/app/test-eventdev/test_order_queue.c
> @@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
>  }
>
>  static int
> -order_queue_worker(void *arg)
> +order_queue_worker(void *arg, const bool flow_id_cap)
>  {
>         ORDER_WORKER_INIT;
>         struct rte_event ev;
> @@ -34,6 +34,9 @@ order_queue_worker(void *arg)
>                         continue;
>                 }
>
> +               if (!flow_id_cap)
> +                       ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.queue_id == 0) { /* from ordered queue */
>                         order_queue_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_queue_worker(void *arg)
>  }
>
>  static int
> -order_queue_worker_burst(void *arg)
> +order_queue_worker_burst(void *arg, const bool flow_id_cap)
>  {
>         ORDER_WORKER_INIT;
>         struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +
> +                       if (!flow_id_cap)
> +                               ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].queue_id == 0) { /* from ordered queue */
>                                 order_queue_process_stage_0(&ev[i]);
>                         } else if (ev[i].queue_id == 1) {/* from atomic queue */
> @@ -95,11 +102,19 @@ worker_wrapper(void *arg)
>  {
>         struct worker_data *w  = arg;
>         const bool burst = evt_has_burst_mode(w->dev_id);
> -
> -       if (burst)
> -               return order_queue_worker_burst(arg);
> -       else
> -               return order_queue_worker(arg);
> +       const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> +       if (burst) {
> +               if (flow_id_cap)
> +                       return order_queue_worker_burst(arg, true);
> +               else
> +                       return order_queue_worker_burst(arg, false);
> +       } else {
> +               if (flow_id_cap)
> +                       return order_queue_worker(arg, true);
> +               else
> +                       return order_queue_worker(arg, false);
> +       }
>  }
>
>  static int
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 43ccb1c..62019c1 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>         if (!(info.event_dev_cap &
>               RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>                 pconf.enqueue_depth = info.max_event_port_enqueue_depth;
> -               pconf.disable_implicit_release = 1;
> +               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>                 ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>                 TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
> -               pconf.disable_implicit_release = 0;
> +               pconf.event_port_cfg = 0;
>         }
>
>         ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 42ff4ee..f70ab0c 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 1,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> @@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>                         .schedule_type = cdata.queue_type,
>                         .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>                         .nb_atomic_flows = 1024,
> -               .nb_atomic_order_sequences = 1024,
> +                       .nb_atomic_order_sequences = 1024,
>         };
>         struct rte_event_queue_conf tx_q_conf = {
>                         .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> @@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         disable_implicit_release = (dev_info.event_dev_cap &
>                         RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>
> -       wkr_p_conf.disable_implicit_release = disable_implicit_release;
> +       wkr_p_conf.event_port_cfg = disable_implicit_release ?
> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
>         if (dev_info.max_num_events < config.nb_events_limit)
>                 config.nb_events_limit = dev_info.max_num_events;
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index 55bb2f7..ca6cd20 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
> index 2dc95e5..9a3167c 100644
> --- a/examples/l2fwd-event/l2fwd_event_generic.c
> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> @@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |=
> +                       RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
> index 63d57b4..203a14c 100644
> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> @@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |=
> +                       RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> index f8c9843..c80573f 100644
> --- a/examples/l3fwd/l3fwd_event_generic.c
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |=
> +                       RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> index 03ac581..9916a7f 100644
> --- a/examples/l3fwd/l3fwd_event_internal_port.c
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |=
> +                       RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> --
> 2.6.4
>

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3] security: update session create API
  2020-10-10 22:11  2% ` [dpdk-dev] [PATCH v2] " Akhil Goyal
  2020-10-13  2:12  0%   ` Lukasz Wojciechowski
@ 2020-10-14 18:56  2%   ` Akhil Goyal
  2020-10-15  1:11  0%     ` Lukasz Wojciechowski
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-14 18:56 UTC (permalink / raw)
  To: dev
  Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
	declan.doherty, radu.nicolau, david.coyle, l.wojciechow,
	Akhil Goyal

The API ``rte_security_session_create`` takes only a single
mempool for both the session and the session private data. As
a result, the application needs to create a mempool for twice
the number of sessions required, which also wastes memory
since the session private data needs more memory than the
session itself. Hence the API is modified to take two mempool
pointers - one for the session and one for the private data.
This is very similar to the crypto session create APIs.
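
As an illustration only (not part of this patch), below is a minimal
sketch of how an application might allocate the two mempools and call
the updated API. The function create_sec_session() and the nb_sess,
cache_sz, sec_ctx and sess_conf parameters are hypothetical; the
private data element size is taken from rte_security_session_get_size()
as described above.

#include <stdint.h>

#include <rte_mempool.h>
#include <rte_security.h>

/*
 * Hypothetical helper: allocate the session and private data mempools
 * and create a security session with the two-mempool API.
 */
static struct rte_security_session *
create_sec_session(struct rte_security_ctx *sec_ctx,
		   struct rte_security_session_conf *sess_conf,
		   uint32_t nb_sess, uint32_t cache_sz, int socket_id)
{
	struct rte_mempool *sess_mp, *sess_priv_mp;
	unsigned int priv_sz;

	/* Session mempool: elements only hold struct rte_security_session. */
	sess_mp = rte_mempool_create("sec_sess_mp", nb_sess,
			sizeof(struct rte_security_session), cache_sz,
			0, NULL, NULL, NULL, NULL, socket_id, 0);
	if (sess_mp == NULL)
		return NULL;

	/* Private data mempool: element size is reported by the driver. */
	priv_sz = rte_security_session_get_size(sec_ctx);
	sess_priv_mp = rte_mempool_create("sec_sess_priv_mp", nb_sess,
			priv_sz, cache_sz, 0, NULL, NULL, NULL, NULL,
			socket_id, 0);
	if (sess_priv_mp == NULL) {
		rte_mempool_free(sess_mp);
		return NULL;
	}

	/* Updated signature: session mempool plus private data mempool. */
	return rte_security_session_create(sec_ctx, sess_conf,
			sess_mp, sess_priv_mp);
}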

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
Changes in v3:
Fixed checkpatch issues.
Added a new test in test_security.c for priv_mempool.

Changes in v2:
Incorporated comments from Lukasz and David.

 app/test-crypto-perf/cperf_ops.c       |   4 +-
 app/test-crypto-perf/main.c            |  12 +-
 app/test/test_cryptodev.c              |  18 ++-
 app/test/test_ipsec.c                  |   3 +-
 app/test/test_security.c               | 160 ++++++++++++++++++++++---
 doc/guides/prog_guide/rte_security.rst |   8 +-
 doc/guides/rel_notes/deprecation.rst   |   7 --
 doc/guides/rel_notes/release_20_11.rst |   6 +
 examples/ipsec-secgw/ipsec-secgw.c     |  12 +-
 examples/ipsec-secgw/ipsec.c           |   9 +-
 lib/librte_security/rte_security.c     |   7 +-
 lib/librte_security/rte_security.h     |   4 +-
 12 files changed, 196 insertions(+), 54 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 3da835a9c..3a64a2c34 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp);
+					&sess_conf, sess_mp, priv_mp);
 	}
 	if (options->op_type == CPERF_DOCSIS) {
 		enum rte_security_docsis_direction direction;
@@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, priv_mp);
+					&sess_conf, sess_mp, priv_mp);
 	}
 #endif
 	sess = rte_cryptodev_sym_session_create(sess_mp);
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 62ae6048b..53864ffdd 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 		if (sess_size > max_sess_size)
 			max_sess_size = sess_size;
 	}
-
+#ifdef RTE_LIBRTE_SECURITY
+	for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
+		sess_size = rte_security_session_get_size(
+				rte_cryptodev_get_sec_ctx(cdev_id));
+		if (sess_size > max_sess_size)
+			max_sess_size = sess_size;
+	}
+#endif
 	/*
 	 * Calculate number of needed queue pairs, based on the amount
 	 * of available number of logical cores and crypto devices.
@@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 				opts->nb_qps * nb_slaves;
 #endif
 		} else
-			sessions_needed = enabled_cdev_count *
-						opts->nb_qps * 2;
+			sessions_needed = enabled_cdev_count * opts->nb_qps;
 
 		/*
 		 * A single session is required per queue pair
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c7975ed01..9f1b92c51 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -773,9 +773,15 @@ testsuite_setup(void)
 	unsigned int session_size =
 		rte_cryptodev_sym_get_private_session_size(dev_id);
 
+#ifdef RTE_LIBRTE_SECURITY
+	unsigned int security_session_size = rte_security_session_get_size(
+			rte_cryptodev_get_sec_ctx(dev_id));
+
+	if (session_size < security_session_size)
+		session_size = security_session_size;
+#endif
 	/*
-	 * Create mempool with maximum number of sessions * 2,
-	 * to include the session headers
+	 * Create mempool with maximum number of sessions.
 	 */
 	if (info.sym.max_nb_sessions != 0 &&
 			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
@@ -7751,7 +7757,8 @@ test_pdcp_proto(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool,
+				ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -8011,7 +8018,8 @@ test_pdcp_proto_SGL(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool,
+				ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -8368,6 +8376,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+					ts_params->session_mpool,
 					ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
@@ -8543,6 +8552,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+					ts_params->session_mpool,
 					ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index 79d00d7e0..9ad07a179 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
 	static struct rte_security_session_conf conf;
 
 	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
-					&conf, qp->mp_session_private);
+					&conf, qp->mp_session,
+					qp->mp_session_private);
 
 	if (ut->ss[j].security.ses == NULL)
 		return -ENOMEM;
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 77fd5adc6..35ed6ff10 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -200,6 +200,24 @@
 			expected_mempool_usage, mempool_usage);		\
 } while (0)
 
+/**
+ * Verify usage of mempool by checking if number of allocated objects matches
+ * expectations. The mempool is used to manage objects for sessions priv data.
+ * A single object is acquired from mempool during session_create
+ * and put back in session_destroy.
+ *
+ * @param   expected_priv_mp_usage	expected number of used priv mp objects
+ */
+#define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do {		\
+	struct security_testsuite_params *ts_params = &testsuite_params;\
+	unsigned int priv_mp_usage;					\
+	priv_mp_usage = rte_mempool_in_use_count(			\
+			ts_params->session_priv_mpool);			\
+	TEST_ASSERT_EQUAL(expected_priv_mp_usage, priv_mp_usage,	\
+			"Expecting %u priv mempool allocations, "		\
+			"but there are %u allocated objects",		\
+			expected_priv_mp_usage, priv_mp_usage);		\
+} while (0)
 
 /**
  * Mockup structures and functions for rte_security_ops;
@@ -237,27 +255,38 @@ static struct mock_session_create_data {
 	struct rte_security_session_conf *conf;
 	struct rte_security_session *sess;
 	struct rte_mempool *mp;
+	struct rte_mempool *priv_mp;
 
 	int ret;
 
 	int called;
 	int failed;
-} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
+} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
 
 static int
 mock_session_create(void *device,
 		struct rte_security_session_conf *conf,
 		struct rte_security_session *sess,
-		struct rte_mempool *mp)
+		struct rte_mempool *priv_mp)
 {
+	void *sess_priv;
+	int ret;
+
 	mock_session_create_exp.called++;
 
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
-	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
+	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
+	ret = rte_mempool_get(priv_mp, &sess_priv);
+	TEST_ASSERT_EQUAL(0, ret,
+		"priv mempool does not have enough objects");
 
+	set_sec_session_private_data(sess, sess_priv);
 	mock_session_create_exp.sess = sess;
 
+	if (mock_session_create_exp.ret != 0)
+		rte_mempool_put(priv_mp, sess_priv);
+
 	return mock_session_create_exp.ret;
 }
 
@@ -363,8 +392,10 @@ static struct mock_session_destroy_data {
 static int
 mock_session_destroy(void *device, struct rte_security_session *sess)
 {
-	mock_session_destroy_exp.called++;
+	void *sess_priv = get_sec_session_private_data(sess);
 
+	mock_session_destroy_exp.called++;
+	rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
 
@@ -502,6 +533,7 @@ struct rte_security_ops mock_ops = {
  */
 static struct security_testsuite_params {
 	struct rte_mempool *session_mpool;
+	struct rte_mempool *session_priv_mpool;
 } testsuite_params = { NULL };
 
 /**
@@ -524,9 +556,11 @@ static struct security_unittest_params {
 	.sess = NULL,
 };
 
-#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
+#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
+#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
 #define SECURITY_TEST_MEMPOOL_SIZE 15
-#define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
+#define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
+#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
 
 /**
  * testsuite_setup initializes whole test suite parameters.
@@ -540,11 +574,27 @@ testsuite_setup(void)
 	ts_params->session_mpool = rte_mempool_create(
 			SECURITY_TEST_MEMPOOL_NAME,
 			SECURITY_TEST_MEMPOOL_SIZE,
-			SECURITY_TEST_SESSION_OBJECT_SIZE,
+			SECURITY_TEST_SESSION_OBJ_SZ,
 			0, 0, NULL, NULL, NULL, NULL,
 			SOCKET_ID_ANY, 0);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"Cannot create mempool %s\n", rte_strerror(rte_errno));
+
+	ts_params->session_priv_mpool = rte_mempool_create(
+			SECURITY_TEST_PRIV_MEMPOOL_NAME,
+			SECURITY_TEST_MEMPOOL_SIZE,
+			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
+			0, 0, NULL, NULL, NULL, NULL,
+			SOCKET_ID_ANY, 0);
+	if (ts_params->session_priv_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
+				"Cannot create priv mempool %s\n",
+				__func__, __LINE__, rte_strerror(rte_errno));
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+		return TEST_FAILED;
+	}
+
 	return TEST_SUCCESS;
 }
 
@@ -559,6 +609,10 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
 	}
+	if (ts_params->session_priv_mpool) {
+		rte_mempool_free(ts_params->session_priv_mpool);
+		ts_params->session_priv_mpool = NULL;
+	}
 }
 
 /**
@@ -656,10 +710,12 @@ ut_setup_with_session(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
+	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -701,11 +757,13 @@ test_session_create_inv_context(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(NULL, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -725,11 +783,13 @@ test_session_create_inv_context_ops(void)
 	ut_params->ctx.ops = NULL;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -749,11 +809,13 @@ test_session_create_inv_context_ops_fun(void)
 	ut_params->ctx.ops = &empty_ops;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -770,18 +832,21 @@ test_session_create_inv_configuration(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, NULL,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
 }
 
 /**
- * Test execution of rte_security_session_create with NULL mp parameter
+ * Test execution of rte_security_session_create with NULL session
+ * mempool
  */
 static int
 test_session_create_inv_mempool(void)
@@ -790,11 +855,35 @@ test_session_create_inv_mempool(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			NULL);
+			NULL, NULL);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
+	TEST_ASSERT_SESSION_COUNT(0);
+
+	return TEST_SUCCESS;
+}
+
+/**
+ * Test execution of rte_security_session_create with NULL session
+ * priv mempool
+ */
+static int
+test_session_create_inv_sess_priv_mempool(void)
+{
+	struct security_unittest_params *ut_params = &unittest_params;
+	struct security_testsuite_params *ts_params = &testsuite_params;
+	struct rte_security_session *sess;
+
+	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
+			ts_params->session_mpool, NULL);
+	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
+			sess, NULL, "%p");
+	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
+	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -810,6 +899,7 @@ test_session_create_mempool_empty(void)
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
 	struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
+	void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
 	struct rte_security_session *sess;
 
 	/* Get all available objects from mempool. */
@@ -820,21 +910,34 @@ test_session_create_mempool_empty(void)
 		TEST_ASSERT_EQUAL(0, ret,
 				"Expect getting %d object from mempool"
 				" to succeed", i);
+		ret = rte_mempool_get(ts_params->session_priv_mpool,
+				(void **)(&tmp1[i]));
+		TEST_ASSERT_EQUAL(0, ret,
+				"Expect getting %d object from priv mempool"
+				" to succeed", i);
 	}
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/* Put objects back to the pool. */
-	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i)
-		rte_mempool_put(ts_params->session_mpool, (void *)(tmp[i]));
+	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
+		rte_mempool_put(ts_params->session_mpool,
+				(void *)(tmp[i]));
+		rte_mempool_put(ts_params->session_priv_mpool,
+				(tmp1[i]));
+	}
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 
 	return TEST_SUCCESS;
 }
@@ -853,14 +956,17 @@ test_session_create_ops_failure(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
+	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = -1;	/* Return failure status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -879,10 +985,12 @@ test_session_create_success(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
+	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;	/* Return success status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -891,6 +999,7 @@ test_session_create_success(void)
 			sess, mock_session_create_exp.sess);
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	/*
@@ -1276,6 +1385,7 @@ test_session_destroy_inv_context(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(NULL, ut_params->sess);
@@ -1283,6 +1393,7 @@ test_session_destroy_inv_context(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1299,6 +1410,7 @@ test_session_destroy_inv_context_ops(void)
 	ut_params->ctx.ops = NULL;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1307,6 +1419,7 @@ test_session_destroy_inv_context_ops(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1323,6 +1436,7 @@ test_session_destroy_inv_context_ops_fun(void)
 	ut_params->ctx.ops = &empty_ops;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1331,6 +1445,7 @@ test_session_destroy_inv_context_ops_fun(void)
 			ret, -ENOTSUP, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1345,6 +1460,7 @@ test_session_destroy_inv_session(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
@@ -1352,6 +1468,7 @@ test_session_destroy_inv_session(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1371,6 +1488,7 @@ test_session_destroy_ops_failure(void)
 	mock_session_destroy_exp.ret = -1;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1396,6 +1514,7 @@ test_session_destroy_success(void)
 	mock_session_destroy_exp.sess = ut_params->sess;
 	mock_session_destroy_exp.ret = 0;
 	TEST_ASSERT_MEMPOOL_USAGE(1);
+	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1404,6 +1523,7 @@ test_session_destroy_success(void)
 			ret, 0, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
+	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/*
@@ -2370,6 +2490,8 @@ static struct unit_test_suite security_testsuite  = {
 				test_session_create_inv_configuration),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_inv_mempool),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_session_create_inv_sess_priv_mempool),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_mempool_empty),
 		TEST_CASE_ST(ut_setup, ut_teardown,
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 127da2e4f..d30a79576 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
 
 The Security framework provides APIs to create and free sessions for crypto/ethernet
 devices, where sessions are mempool objects. It is the application's responsibility
-to create and manage the session mempools. The mempool object size should be able to
-accommodate the driver's private data of security session.
+to create and manage two session mempools - one for session and other for session
+private data. The private session data mempool object size should be able to
+accommodate the driver's private data of security session. The application can get
+the size of session private data using API ``rte_security_session_get_size``.
+And the session mempool object size should be enough to accommodate
+``rte_security_session``.
 
 Once the session mempools have been created, ``rte_security_session_create()``
 is used to allocate and initialize a session for the required crypto/ethernet device.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 43cdd3c58..26be1b3de 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,13 +164,6 @@ Deprecation Notices
   following the IPv6 header, as proposed in RFC
   https://mails.dpdk.org/archives/dev/2020-August/177257.html.
 
-* security: The API ``rte_security_session_create`` takes only single mempool
-  for session and session private data. So the application need to create
-  mempool for twice the number of sessions needed and will also lead to
-  wastage of memory as session private data need more memory compared to session.
-  Hence the API will be modified to take two mempool pointers - one for session
-  and one for private data.
-
 * cryptodev: support for using IV with all sizes is added, J0 still can
   be used but only when IV length in following structs ``rte_crypto_auth_xform``,
   ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f1b9b4dfe..0fb1b20cb 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -344,6 +344,12 @@ API Changes
 * The structure ``rte_crypto_sym_vec`` is updated to support both
   cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
 
+* security: The API ``rte_security_session_create`` is updated to take two
+  mempool objects one for session and other for session private data.
+  So the application need to create two mempools and get the size of session
+  private data using API ``rte_security_session_get_size`` for private session
+  mempool.
+
 
 ABI Changes
 -----------
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 60132c4bd..2326089bb 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
 
 	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_%u", socket_id);
-	/*
-	 * Doubled due to rte_security_session_create() uses one mempool for
-	 * session and for session private data.
-	 */
 	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count()) * 2;
+		rte_lcore_count());
 	sess_mp = rte_cryptodev_sym_session_pool_create(
 			mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
 			socket_id);
@@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
 
 	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_priv_%u", socket_id);
-	/*
-	 * Doubled due to rte_security_session_create() uses one mempool for
-	 * session and for session private data.
-	 */
 	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count()) * 2;
+		rte_lcore_count());
 	sess_mp = rte_mempool_create(mp_name,
 			nb_sess,
 			sess_sz,
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 01faa7ac7..6baeeb342 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			set_ipsec_conf(sa, &(sess_conf.ipsec));
 
 			ips->security.ses = rte_security_session_create(ctx,
-					&sess_conf, ipsec_ctx->session_priv_pool);
+					&sess_conf, ipsec_ctx->session_pool,
+					ipsec_ctx->session_priv_pool);
 			if (ips->security.ses == NULL) {
 				RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		}
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-				&sess_conf, skt_ctx->session_pool);
+				&sess_conf, skt_ctx->session_pool,
+				skt_ctx->session_priv_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		sess_conf.userdata = (void *) sa;
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-					&sess_conf, skt_ctx->session_pool);
+					&sess_conf, skt_ctx->session_pool,
+					skt_ctx->session_priv_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
index 515c29e04..ee4666026 100644
--- a/lib/librte_security/rte_security.c
+++ b/lib/librte_security/rte_security.c
@@ -26,18 +26,21 @@
 struct rte_security_session *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp)
+			    struct rte_mempool *mp,
+			    struct rte_mempool *priv_mp)
 {
 	struct rte_security_session *sess = NULL;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
 	RTE_PTR_OR_ERR_RET(conf, NULL);
 	RTE_PTR_OR_ERR_RET(mp, NULL);
+	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
 
 	if (rte_mempool_get(mp, (void **)&sess))
 		return NULL;
 
-	if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+	if (instance->ops->session_create(instance->device, conf,
+				sess, priv_mp)) {
 		rte_mempool_put(mp, (void *)sess);
 		return NULL;
 	}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 16839e539..1710cdd6a 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -386,6 +386,7 @@ struct rte_security_session {
  * @param   instance	security instance
  * @param   conf	session configuration parameters
  * @param   mp		mempool to allocate session objects from
+ * @param   priv_mp	mempool to allocate session private data objects from
  * @return
  *  - On success, pointer to session
  *  - On failure, NULL
@@ -393,7 +394,8 @@ struct rte_security_session {
 struct rte_security_session *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp);
+			    struct rte_mempool *mp,
+			    struct rte_mempool *priv_mp);
 
 /**
  * Update security session as specified by the session configuration
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v3] eventdev: update app and examples for new eventdev ABI
  @ 2020-10-14 17:33  6% ` Timothy McDaniel
  2020-10-14 20:01  4%   ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-14 17:33 UTC (permalink / raw)
  To: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, erik.g.carrillo, gage.eads

Several data structures and constants were changed or added in the
previous patch. This commit updates the dependent apps and examples
to use the new ABI.
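
For reference, here is a minimal sketch (not part of this patch) of the
port configuration change that recurs throughout the diffs below: the
old disable_implicit_release bit field is replaced by the event_port_cfg
flags field. The helper name setup_worker_port() and the disable_release
parameter are hypothetical.

#include <stdbool.h>
#include <stdint.h>

#include <rte_eventdev.h>

/* Hypothetical helper showing the old-to-new port config migration. */
static int
setup_worker_port(uint8_t dev_id, uint8_t port_id, bool disable_release)
{
	struct rte_event_port_conf pconf;
	int ret;

	ret = rte_event_port_default_conf_get(dev_id, port_id, &pconf);
	if (ret < 0)
		return ret;

	/* Old ABI: pconf.disable_implicit_release = disable_release; */
	pconf.event_port_cfg = disable_release ?
			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;

	return rte_event_port_setup(dev_id, port_id, &pconf);
}

Devices that lack RTE_EVENT_DEV_CAP_CARRY_FLOW_ID also require workers
to restore ev.flow_id themselves, which is why the test-eventdev changes
below stash the flow id in mbuf->udata64.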

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 app/test-eventdev/evt_common.h                     | 11 ++++++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++------
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 ++++++++++++++++------
 app/test/test_eventdev.c                           |  4 +--
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
 11 files changed, 80 insertions(+), 26 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+			true : false;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
 			.dequeue_timeout_ns = opt->deq_tmo_nsec,
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = info.max_num_events,
 			.nb_event_queue_flows = opt->nb_flows,
 			.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.sub_event_type == 0) { /* stage 0 from producer */
 			order_atq_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
 }
 
 static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].sub_event_type == 0) { /*stage 0 */
 				order_atq_process_stage_0(&ev[i]);
 			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_atq_worker_burst(arg);
-	else
-		return order_atq_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_atq_worker_burst(arg, true);
+		else
+			return order_atq_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_atq_worker(arg, true);
+		else
+			return order_atq_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
 		const uint32_t flow = (uintptr_t)m % nb_flows;
 		/* Maintain seq number per flow */
 		m->seqn = producer_flow_seq[flow]++;
+		m->udata64 = flow;
 
 		ev.flow_id = flow;
 		ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.queue_id == 0) { /* from ordered queue */
 			order_queue_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
 }
 
 static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].queue_id == 0) { /* from ordered queue */
 				order_queue_process_stage_0(&ev[i]);
 			} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_queue_worker_burst(arg);
-	else
-		return order_queue_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_queue_worker_burst(arg, true);
+		else
+			return order_queue_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_queue_worker(arg, true);
+		else
+			return order_queue_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
 	if (!(info.event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
-		pconf.disable_implicit_release = 1;
+		pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
 		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-		pconf.disable_implicit_release = 0;
+		pconf.event_port_cfg = 0;
 	}
 
 	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 1,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
-		.nb_atomic_order_sequences = 1024,
+			.nb_atomic_order_sequences = 1024,
 	};
 	struct rte_event_queue_conf tx_q_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	disable_implicit_release = (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 
-	wkr_p_conf.disable_implicit_release = disable_implicit_release;
+	wkr_p_conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (dev_info.max_num_events < config.nb_events_limit)
 		config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
-- 
2.6.4


^ permalink raw reply	[relevance 6%]

* Re: [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets
  2020-10-14 16:35  3% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Dekel Peled
  2020-10-14 16:35  4%   ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
@ 2020-10-14 17:18  0%   ` Ferruh Yigit
  1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-14 17:18 UTC (permalink / raw)
  To: Dekel Peled, orika, thomas, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

On 10/14/2020 5:35 PM, Dekel Peled wrote:
> This series implements support for matching on packets based on the
> fragmentation attribute of the packet, i.e. whether the packet is a
> fragment of a larger packet or the opposite - the packet is not a
> fragment.
> 
> In ethdev, add API to support IPv6 extension headers, and specifically
> the IPv6 fragment extension header item.
> Testpmd CLI is updated accordingly.
> Documentation is updated accordingly.
> 
> ---
> v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
> v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
> v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
> v5: update following rebase on recent ICMP changes.
> v6: - move MLX5 PMD patches to separate series.
>      - rename IPv6 extension flags for clarity (e.g. frag_ext_exist renamed to has_frag_ext).
> v7: remove the announcement from deprecation file.
> ---
> 
> Dekel Peled (5):
>    ethdev: add extensions attributes to IPv6 item
>    ethdev: add IPv6 fragment extension header item
>    app/testpmd: support IPv4 fragments
>    app/testpmd: support IPv6 fragments
>    app/testpmd: support IPv6 fragment extension item
> 

Series applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets
    @ 2020-10-14 16:35  3% ` Dekel Peled
  2020-10-14 16:35  4%   ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
  2020-10-14 17:18  0%   ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
  1 sibling, 2 replies; 200+ results
From: Dekel Peled @ 2020-10-14 16:35 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This series implements support for matching on packets based on the
fragmentation attribute of the packet, i.e. whether the packet is a
fragment of a larger packet or the opposite - the packet is not a
fragment.

In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.

---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
v5: update following rebase on recent ICMP changes.
v6: - move MLX5 PMD patches to separate series.
    - rename IPv6 extension flags for clarity (e.g. frag_ext_exist renamed to has_frag_ext).
v7: remove the announcement from deprecation file.
---

Dekel Peled (5):
  ethdev: add extensions attributes to IPv6 item
  ethdev: add IPv6 fragment extension header item
  app/testpmd: support IPv4 fragments
  app/testpmd: support IPv6 fragments
  app/testpmd: support IPv6 fragment extension item

 app/test-pmd/cmdline_flow.c            | 53 ++++++++++++++++++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst     | 32 ++++++++++++++++++--
 doc/guides/rel_notes/deprecation.rst   |  5 ----
 doc/guides/rel_notes/release_20_11.rst |  5 ++++
 lib/librte_ethdev/rte_flow.c           |  1 +
 lib/librte_ethdev/rte_flow.h           | 43 +++++++++++++++++++++++++--
 lib/librte_ip_frag/rte_ip_frag.h       | 26 ++---------------
 lib/librte_net/rte_ip.h                | 26 +++++++++++++++--
 8 files changed, 155 insertions(+), 36 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item
  2020-10-14 16:35  3% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Dekel Peled
@ 2020-10-14 16:35  4%   ` Dekel Peled
  2020-10-14 17:18  0%   ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
  1 sibling, 0 replies; 200+ results
From: Dekel Peled @ 2020-10-14 16:35 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

Using the current implementation of DPDK, an application cannot match
on IPv6 packets based on the existing extension headers in a simple
way.

The 'Next Header' field in the IPv6 header indicates the type of the
first extension header only. Subsequent extension headers can't be
identified by inspecting the IPv6 header.
As a result, the existence or absence of specific extension headers
can't be used for packet matching.

For example, fragmented IPv6 packets contain a dedicated extension
header (which is implemented in a later patch of this series).
Non-fragmented packets don't contain the fragment extension header.
The current implementation doesn't provide a suitable way for an
application to match on non-fragmented IPv6 packets.
Matching on the Next Header field is not sufficient, since additional
extension headers might be present in the same packet.
The same difficulty exists for matching on fragmented IPv6 packets.

This patch implements the update as detailed in RFC [1].
A set of additional flags is added to the IPv6 flow item.
These flags indicate the existence of every defined extension header
type, providing a simple means of identifying the extensions present
in the packet header.
Continuing the above example, fragmented packets can be identified
using the specific flag indicating the existence of the fragment
extension header.
To match on non-fragmented IPv6 packets, an application needs to use
has_frag_ext 0.
To match on fragmented IPv6 packets, it needs to use has_frag_ext 1.
To match on any IPv6 packet, the has_frag_ext field should not be
specified for the match.

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
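
For illustration only (not part of this patch), an application could
request matching of fragmented IPv6 packets with the new field roughly
as follows, using the usual rte_flow pattern conventions:

    /* Match only IPv6 packets that carry a fragment extension header. */
    struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 1 };
    struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6,
          .spec = &ipv6_spec, .mask = &ipv6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

Setting has_frag_ext to 0 in the spec, with the same mask, would instead
match only non-fragmented IPv6 packets.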

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
 doc/guides/prog_guide/rte_flow.rst     | 20 +++++++++++++++++---
 doc/guides/rel_notes/deprecation.rst   |  5 -----
 doc/guides/rel_notes/release_20_11.rst |  5 +++++
 lib/librte_ethdev/rte_flow.h           | 23 +++++++++++++++++++++--
 4 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index f26a6c2..97fdf2a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -946,11 +946,25 @@ Item: ``IPV6``
 
 Matches an IPv6 header.
 
-Note: IPv6 options are handled by dedicated pattern items, see `Item:
-IPV6_EXT`_.
+Dedicated flags indicate if header contains specific extension headers.
+To match on packets containing a specific extension header, an application
+should match on the dedicated flag set to 1.
+To match on packets not containing a specific extension header, an application
+should match on the dedicated flag clear to 0.
+In case application doesn't care about the existence of a specific extension
+header, it should not specify the dedicated flag for matching.
 
 - ``hdr``: IPv6 header definition (``rte_ip.h``).
-- Default ``mask`` matches source and destination addresses only.
+- ``has_hop_ext``: header contains Hop-by-Hop Options extension header.
+- ``has_route_ext``: header contains Routing extension header.
+- ``has_frag_ext``: header contains Fragment extension header.
+- ``has_auth_ext``: header contains Authentication extension header.
+- ``has_esp_ext``: header contains Encapsulation Security Payload extension header.
+- ``has_dest_ext``: header contains Destination Options extension header.
+- ``has_mobil_ext``: header contains Mobility extension header.
+- ``has_hip_ext``: header contains Host Identity Protocol extension header.
+- ``has_shim6_ext``: header contains Shim6 Protocol extension header.
+- Default ``mask`` matches ``hdr`` source and destination addresses only.
 
 Item: ``ICMP``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e720..87a7c44 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -159,11 +159,6 @@ Deprecation Notices
   or absence of a VLAN header following the current header, as proposed in RFC
   https://mails.dpdk.org/archives/dev/2020-August/177536.html.
 
-* ethdev: The ``struct rte_flow_item_ipv6`` struct will be modified to include
-  additional values, indicating existence or absence of IPv6 extension headers
-  following the IPv6 header, as proposed in RFC
-  https://mails.dpdk.org/archives/dev/2020-August/177257.html.
-
 * security: The API ``rte_security_session_create`` takes only single mempool
   for session and session private data. So the application need to create
   mempool for twice the number of sessions needed and will also lead to
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 30db8f2..730e9df 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -351,6 +351,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values added to struct, indicating the existence of
+    every defined extension header type.
+    Applications should use the new values for identification of existing
+    extensions in the packet header.
 
 Known Issues
 ------------
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 3d5fb09..aa18925 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -792,11 +792,30 @@ struct rte_flow_item_ipv4 {
  *
  * Matches an IPv6 header.
  *
- * Note: IPv6 options are handled by dedicated pattern items, see
- * RTE_FLOW_ITEM_TYPE_IPV6_EXT.
+ * Dedicated flags indicate if header contains specific extension headers.
  */
 struct rte_flow_item_ipv6 {
 	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
+	uint32_t has_hop_ext:1;
+	/**< Header contains Hop-by-Hop Options extension header. */
+	uint32_t has_route_ext:1;
+	/**< Header contains Routing extension header. */
+	uint32_t has_frag_ext:1;
+	/**< Header contains Fragment extension header. */
+	uint32_t has_auth_ext:1;
+	/**< Header contains Authentication extension header. */
+	uint32_t has_esp_ext:1;
+	/**< Header contains Encapsulation Security Payload extension header. */
+	uint32_t has_dest_ext:1;
+	/**< Header contains Destination Options extension header. */
+	uint32_t has_mobil_ext:1;
+	/**< Header contains Mobility extension header. */
+	uint32_t has_hip_ext:1;
+	/**< Header contains Host Identity Protocol extension header. */
+	uint32_t has_shim6_ext:1;
+	/**< Header contains Shim6 Protocol extension header. */
+	uint32_t reserved:23;
+	/**< Reserved for future extension headers, must be zero. */
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. */
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v6 03/18] eal: rename lcore word choices
  @ 2020-10-14 15:27  1%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-14 15:27 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Anatoly Burakov

Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
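
For illustration (not part of this patch), application code moves from
the deprecated names to the new ones as in the following sketch, run
after rte_eal_init(); worker_fn is a hypothetical lcore_function_t:

    unsigned int lcore_id;
    unsigned int main_id = rte_get_main_lcore(); /* was rte_get_master_lcore() */

    /* was RTE_LCORE_FOREACH_SLAVE() */
    RTE_LCORE_FOREACH_WORKER(lcore_id)
            RTE_LOG(INFO, USER1, "worker %u, main is %u\n",
                    lcore_id, main_id);

    /* was SKIP_MASTER */
    rte_eal_mp_remote_launch(worker_fn, NULL, SKIP_MAIN);
    rte_eal_mp_wait_lcore();

The old spellings still compile in this release but are marked as
deprecated.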

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/deprecation.rst       | 19 -------
 doc/guides/rel_notes/release_20_11.rst     | 11 ++++
 lib/librte_eal/common/eal_common_dynmem.c  | 10 ++--
 lib/librte_eal/common/eal_common_launch.c  | 36 ++++++------
 lib/librte_eal/common/eal_common_lcore.c   |  8 +--
 lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
 lib/librte_eal/common/eal_options.h        |  2 +
 lib/librte_eal/common/eal_private.h        |  6 +-
 lib/librte_eal/common/rte_random.c         |  2 +-
 lib/librte_eal/common/rte_service.c        |  2 +-
 lib/librte_eal/freebsd/eal.c               | 28 +++++-----
 lib/librte_eal/freebsd/eal_thread.c        | 32 +++++------
 lib/librte_eal/include/rte_eal.h           |  4 +-
 lib/librte_eal/include/rte_eal_trace.h     |  4 +-
 lib/librte_eal/include/rte_launch.h        | 60 ++++++++++----------
 lib/librte_eal/include/rte_lcore.h         | 35 ++++++++----
 lib/librte_eal/linux/eal.c                 | 28 +++++-----
 lib/librte_eal/linux/eal_memory.c          | 10 ++--
 lib/librte_eal/linux/eal_thread.c          | 32 +++++------
 lib/librte_eal/rte_eal_version.map         |  2 +-
 lib/librte_eal/windows/eal.c               | 16 +++---
 lib/librte_eal/windows/eal_thread.c        | 30 +++++-----
 22 files changed, 230 insertions(+), 211 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087934..7271e9ca4d39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
-* eal: To be more inclusive in choice of naming, the DPDK project
-  will replace uses of master/slave in the API's and command line arguments.
-
-  References to master/slave in relation to lcore will be renamed
-  to initial/worker.  The function ``rte_get_master_lcore()``
-  will be renamed to ``rte_get_initial_lcore()``.
-  For the 20.11 release, both names will be present and the
-  old function will be marked with the deprecated tag.
-  The old function will be removed in a future version.
-
-  The iterator for worker lcores will also change:
-  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
-  ``RTE_LCORE_FOREACH_WORKER``.
-
-  The ``master-lcore`` argument to testpmd will be replaced
-  with ``initial-lcore``. The old ``master-lcore`` argument
-  will produce a runtime notification in 20.11 release, and
-  be removed completely in a future release.
-
 * eal: The terms blacklist and whitelist to describe devices used
   by DPDK will be replaced in the 20.11 relase.
   This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 71665c1de65f..bbc64ea2e3a6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -298,6 +298,17 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* eal: Changed the function ``rte_get_master_lcore()`` is
+  replaced to ``rte_get_main_lcore()``. The old function is deprecated.
+
+  The iterator for worker lcores will also change:
+  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
+  ``RTE_LCORE_FOREACH_WORKER``.
+
+  The ``master-lcore`` argument to testpmd will be replaced
+  with ``main-lcore``. The old ``master-lcore`` argument
+  will produce a runtime notification in 20.11 release, and
+  be removed completely in a future release.
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
 			total_size -= default_size;
 		}
 #else
-		/* in 32-bit mode, allocate all of the memory only on master
+		/* in 32-bit mode, allocate all of the memory only on main
 		 * lcore socket
 		 */
 		total_size = internal_conf->memory;
 		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
 				socket++) {
 			struct rte_config *cfg = rte_eal_get_configuration();
-			unsigned int master_lcore_socket;
+			unsigned int main_lcore_socket;
 
-			master_lcore_socket =
-				rte_lcore_to_socket_id(cfg->master_lcore);
+			main_lcore_socket =
+				rte_lcore_to_socket_id(cfg->main_lcore);
 
-			if (master_lcore_socket != socket)
+			if (main_lcore_socket != socket)
 				continue;
 
 			/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
  * Wait until a lcore finished its job.
  */
 int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
 {
-	if (lcore_config[slave_id].state == WAIT)
+	if (lcore_config[worker_id].state == WAIT)
 		return 0;
 
-	while (lcore_config[slave_id].state != WAIT &&
-	       lcore_config[slave_id].state != FINISHED)
+	while (lcore_config[worker_id].state != WAIT &&
+	       lcore_config[worker_id].state != FINISHED)
 		rte_pause();
 
 	rte_rmb();
 
 	/* we are in finished state, go to wait state */
-	lcore_config[slave_id].state = WAIT;
-	return lcore_config[slave_id].ret;
+	lcore_config[worker_id].state = WAIT;
+	return lcore_config[worker_id].ret;
 }
 
 /*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
  */
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
-			 enum rte_rmt_call_master_t call_master)
+			 enum rte_rmt_call_main_t call_main)
 {
 	int lcore_id;
-	int master = rte_get_master_lcore();
+	int main_lcore = rte_get_main_lcore();
 
 	/* check state of lcores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (lcore_config[lcore_id].state != WAIT)
 			return -EBUSY;
 	}
 
 	/* send messages to cores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_remote_launch(f, arg, lcore_id);
 	}
 
-	if (call_master == CALL_MASTER) {
-		lcore_config[master].ret = f(arg);
-		lcore_config[master].state = FINISHED;
+	if (call_main == CALL_MAIN) {
+		lcore_config[main_lcore].ret = f(arg);
+		lcore_config[main_lcore].state = FINISHED;
 	}
 
 	return 0;
 }
 
 /*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
  */
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
 {
 	unsigned lcore_id;
 
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_wait_lcore(lcore_id);
 	}
 }
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
 {
-	return rte_eal_get_configuration()->master_lcore;
+	return rte_eal_get_configuration()->main_lcore;
 }
 
 unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
 	if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
 
 	while (i < RTE_MAX_LCORE) {
 		if (!rte_lcore_is_enabled(i) ||
-		    (skip_master && (i == rte_get_master_lcore()))) {
+		    (skip_main && (i == rte_get_main_lcore()))) {
 			i++;
 			if (wrap)
 				i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
 	{OPT_TRACE_BUF_SIZE,    1, NULL, OPT_TRACE_BUF_SIZE_NUM   },
 	{OPT_TRACE_MODE,        1, NULL, OPT_TRACE_MODE_NUM       },
 	{OPT_MASTER_LCORE,      1, NULL, OPT_MASTER_LCORE_NUM     },
+	{OPT_MAIN_LCORE,        1, NULL, OPT_MAIN_LCORE_NUM       },
 	{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
 	{OPT_NO_HPET,           0, NULL, OPT_NO_HPET_NUM          },
 	{OPT_NO_HUGE,           0, NULL, OPT_NO_HUGE_NUM          },
@@ -144,7 +145,7 @@ struct device_option {
 static struct device_option_list devopt_list =
 TAILQ_HEAD_INITIALIZER(devopt_list);
 
-static int master_lcore_parsed;
+static int main_lcore_parsed;
 static int mem_parsed;
 static int core_parsed;
 
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
 		for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
 				j++, idx++) {
 			if ((1 << j) & val) {
-				/* handle master lcore already parsed */
+				/* handle main lcore already parsed */
 				uint32_t lcore = idx;
-				if (master_lcore_parsed &&
-						cfg->master_lcore == lcore) {
+				if (main_lcore_parsed &&
+						cfg->main_lcore == lcore) {
 					RTE_LOG(ERR, EAL,
-						"lcore %u is master lcore, cannot use as service core\n",
+						"lcore %u is main lcore, cannot use as service core\n",
 						idx);
 					return -1;
 				}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
 				min = idx;
 			for (idx = min; idx <= max; idx++) {
 				if (cfg->lcore_role[idx] != ROLE_SERVICE) {
-					/* handle master lcore already parsed */
+					/* handle main lcore already parsed */
 					uint32_t lcore = idx;
-					if (cfg->master_lcore == lcore &&
-							master_lcore_parsed) {
+					if (cfg->main_lcore == lcore &&
+							main_lcore_parsed) {
 						RTE_LOG(ERR, EAL,
-							"Error: lcore %u is master lcore, cannot use as service core\n",
+							"Error: lcore %u is main lcore, cannot use as service core\n",
 							idx);
 						return -1;
 					}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
 	return 0;
 }
 
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
 static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
 {
 	char *parsing_end;
 	struct rte_config *cfg = rte_eal_get_configuration();
 
 	errno = 0;
-	cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+	cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
 	if (errno || parsing_end[0] != 0)
 		return -1;
-	if (cfg->master_lcore >= RTE_MAX_LCORE)
+	if (cfg->main_lcore >= RTE_MAX_LCORE)
 		return -1;
-	master_lcore_parsed = 1;
+	main_lcore_parsed = 1;
 
-	/* ensure master core is not used as service core */
-	if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+	/* ensure main core is not used as service core */
+	if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
 		RTE_LOG(ERR, EAL,
-			"Error: Master lcore is used as a service core\n");
+			"Error: Main lcore is used as a service core\n");
 		return -1;
 	}
 
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
 		break;
 
 	case OPT_MASTER_LCORE_NUM:
-		if (eal_parse_master_lcore(optarg) < 0) {
+		fprintf(stderr,
+			"Option --" OPT_MASTER_LCORE
+			" is deprecated use " OPT_MAIN_LCORE "\n");
+		/* fallthrough */
+	case OPT_MAIN_LCORE_NUM:
+		if (eal_parse_main_lcore(optarg) < 0) {
 			RTE_LOG(ERR, EAL, "invalid parameter for --"
-					OPT_MASTER_LCORE "\n");
+					OPT_MAIN_LCORE "\n");
 			return -1;
 		}
 		break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
 
 	RTE_CPU_AND(cpuset, cpuset, &default_set);
 
-	/* if no remaining cpu, use master lcore cpu affinity */
+	/* if no remaining cpu, use main lcore cpu affinity */
 	if (!CPU_COUNT(cpuset)) {
-		memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+		memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
 			sizeof(*cpuset));
 	}
 }
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
 	if (internal_conf->process_type == RTE_PROC_AUTO)
 		internal_conf->process_type = eal_proc_type_detect();
 
-	/* default master lcore is the first one */
-	if (!master_lcore_parsed) {
-		cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
-		if (cfg->master_lcore >= RTE_MAX_LCORE)
+	/* default main lcore is the first one */
+	if (!main_lcore_parsed) {
+		cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+		if (cfg->main_lcore >= RTE_MAX_LCORE)
 			return -1;
-		lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+		lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
 	}
 
 	compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
-	if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
-		RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+	if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+		RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
 		return -1;
 	}
 
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
 	       "                      '( )' can be omitted for single element group,\n"
 	       "                      '@' can be omitted if cpus and lcores have the same value\n"
 	       "  -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
-	       "  --"OPT_MASTER_LCORE" ID   Core ID that is used as master\n"
+	       "  --"OPT_MAIN_LCORE" ID     Core ID that is used as main\n"
 	       "  --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
 	       "  -n CHANNELS         Number of memory channels\n"
 	       "  -m MB               Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
 	OPT_TRACE_BUF_SIZE_NUM,
 #define OPT_TRACE_MODE        "trace-mode"
 	OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE        "main-lcore"
+	OPT_MAIN_LCORE_NUM,
 #define OPT_MASTER_LCORE      "master-lcore"
 	OPT_MASTER_LCORE_NUM,
 #define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
  */
 struct lcore_config {
 	pthread_t thread_id;       /**< pthread identifier */
-	int pipe_master2slave[2];  /**< communication pipe with master */
-	int pipe_slave2master[2];  /**< communication pipe with master */
+	int pipe_main2worker[2];   /**< communication pipe with main */
+	int pipe_worker2main[2];   /**< communication pipe with main */
 
 	lcore_function_t * volatile f; /**< function to call */
 	void * volatile arg;       /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
  * The global RTE configuration structure.
  */
 struct rte_config {
-	uint32_t master_lcore;       /**< Id of the master lcore */
+	uint32_t main_lcore;         /**< Id of the main lcore */
 	uint32_t lcore_count;        /**< Number of available logical cores. */
 	uint32_t numa_node_count;    /**< Number of detected NUMA nodes. */
 	uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	lcore_id = rte_lcore_id();
 
 	if (unlikely(lcore_id == LCORE_ID_ANY))
-		lcore_id = rte_get_master_lcore();
+		lcore_id = rte_get_main_lcore();
 
 	return &rand_states[lcore_id];
 }
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
 	struct rte_config *cfg = rte_eal_get_configuration();
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
 		if (lcore_config[i].core_role == ROLE_SERVICE) {
-			if ((unsigned int)i == cfg->master_lcore)
+			if ((unsigned int)i == cfg->main_lcore)
 				continue;
 			rte_service_lcore_add(i);
 			count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
 
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
-		config->master_lcore, thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+		config->main_lcore, thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-				"lcore-slave-%d", i);
+				"lcore-worker-%d", i);
 		rte_thread_setname(lcore_config[i].thread_id, thread_name);
 
 		ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
 /**
  * Initialize the Environment Abstraction Layer (EAL).
  *
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
  * as possible in the application's main() function.
  *
  * The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
  *
  * When the multi-partition feature is supported, depending on the
  * configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_eal_trace_thread_remote_launch,
 	RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
-		unsigned int slave_id, int rc),
+		unsigned int worker_id, int rc),
 	rte_trace_point_emit_ptr(f);
 	rte_trace_point_emit_ptr(arg);
-	rte_trace_point_emit_u32(slave_id);
+	rte_trace_point_emit_u32(worker_id);
 	rte_trace_point_emit_int(rc);
 )
 RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
 /**
  * Launch a function on another lcore.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
  * is in the WAIT state (this is true after the first call to
  * rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
  *
  * When the remote lcore receives the message, it switches to
  * the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
  * the return value of f is stored in a local variable to be read using
  * rte_eal_wait_lcore().
  *
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
  * nothing about the completion of f.
  *
  * Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore on which the function should be executed.
  * @return
  *   - 0: Success. Execution of function f started on the remote lcore.
  *   - (-EBUSY): The remote lcore is not in a WAIT state.
  */
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
 
 /**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
  * launched on all logical cores.
  */
-enum rte_rmt_call_master_t {
-	SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
-	CALL_MASTER,     /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+	SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+	CALL_MAIN,     /**< lcore handler executed by main core. */
 };
 
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER	RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER	RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
 /**
  * Launch a function on all lcores.
  *
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
  * rte_eal_remote_launch() for each lcore.
  *
  * @param f
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param call_master
- *   If call_master set to SKIP_MASTER, the MASTER lcore does not call
- *   the function. If call_master is set to CALL_MASTER, the function
- *   is also called on master before returning. In any case, the master
+ * @param call_main
+ *   If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ *   the function. If call_main is set to CALL_MAIN, the function
+ *   is also called on main before returning. In any case, the main
  *   lcore returns as soon as it finished its job and knows nothing
  *   about the completion of f on the other lcores.
  * @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
  *     case, no message is sent to any of the lcores.
  */
 int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
-			     enum rte_rmt_call_master_t call_master);
+			     enum rte_rmt_call_main_t call_main);
 
 /**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
  *   The state of the lcore.
  */
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
 
 /**
  * Wait until an lcore finishes its job.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
  * switch to the WAIT state. If the lcore is in RUNNING state, wait until
  * the lcore finishes its job and moves to the FINISHED state.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
- *   - 0: If the lcore identified by the slave_id is in a WAIT state.
+ *   - 0: If the lcore identified by the worker_id is in a WAIT state.
  *   - The value that was returned by the previous remote launch
- *     function call if the lcore identified by the slave_id was in a
+ *     function call if the lcore identified by the worker_id was in a
  *     FINISHED or RUNNING state. In this case, it changes the state
  *     of the lcore to WAIT.
  */
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
 
 /**
  * Wait until all lcores finish their jobs.
  *
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
  * rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  *
  * After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
  */
 void rte_eal_mp_wait_lcore(void);
 
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
 }
 
 /**
- * Get the id of the master lcore
+ * Get the id of the main lcore
  *
  * @return
- *   the id of the master lcore
+ *   the id of the main lcore
  */
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function the id of the main lcore
+ *
+ * @return
+ *   the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+	return rte_get_main_lcore();
+}
 
 /**
  * Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
  *
  * @param i
  *   The current lcore (reference).
- * @param skip_master
- *   If true, do not return the ID of the master lcore.
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
  * @param wrap
  *   If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
  *   return RTE_MAX_LCORE.
  * @return
  *   The next lcore_id or RTE_MAX_LCORE if not found.
  */
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 
 /**
  * Macro to browse all running lcores.
  */
 #define RTE_LCORE_FOREACH(i)						\
 	for (i = rte_get_next_lcore(-1, 0, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 0, 0))
 
 /**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
  */
-#define RTE_LCORE_FOREACH_SLAVE(i)					\
+#define RTE_LCORE_FOREACH_WORKER(i)					\
 	for (i = rte_get_next_lcore(-1, 1, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 1, 0))
 
+#define RTE_LCORE_FOREACH_SLAVE(l)					\
+	RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
 /**
  * Callback prototype for initializing lcores.
  *
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
-		config->master_lcore, (uintptr_t)thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+		config->main_lcore, (uintptr_t)thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-			"lcore-slave-%d", i);
+			"lcore-worker-%d", i);
 		ret = rte_thread_setname(lcore_config[i].thread_id,
 						thread_name);
 		if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
 	/* the allocation logic is a little bit convoluted, but here's how it
 	 * works, in a nutshell:
 	 *  - if user hasn't specified on which sockets to allocate memory via
-	 *    --socket-mem, we allocate all of our memory on master core socket.
+	 *    --socket-mem, we allocate all of our memory on main core socket.
 	 *  - if user has specified sockets to allocate memory on, there may be
 	 *    some "unused" memory left (e.g. if user has specified --socket-mem
 	 *    such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
 	for (i = 0; i < rte_socket_count(); i++) {
 		int hp_sizes = (int) internal_conf->num_hugepage_sizes;
 		uint64_t max_socket_mem, cur_socket_mem;
-		unsigned int master_lcore_socket;
+		unsigned int main_lcore_socket;
 		struct rte_config *cfg = rte_eal_get_configuration();
 		bool skip;
 
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
 		skip = active_sockets != 0 &&
 				internal_conf->socket_mem[socket_id] == 0;
 		/* ...or if we didn't specifically request memory on *any*
-		 * socket, and this is not master lcore
+		 * socket, and this is not main lcore
 		 */
-		master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
-		skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+		main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+		skip |= active_sockets == 0 && socket_id != main_lcore_socket;
 
 		if (skip) {
 			RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a93dea9fe616..33ee2748ede0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -74,7 +74,7 @@ DPDK_21 {
 	rte_free;
 	rte_get_hpet_cycles;
 	rte_get_hpet_hz;
-	rte_get_master_lcore;
+	rte_get_main_lcore;
 	rte_get_next_lcore;
 	rte_get_tsc_hz;
 	rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index bc48f27ab39a..cbca20956210 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -350,8 +350,8 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	bscan = rte_bus_scan();
 	if (bscan < 0) {
@@ -360,16 +360,16 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (_pipe(lcore_config[i].pipe_master2slave,
+		if (_pipe(lcore_config[i].pipe_main2worker,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (_pipe(lcore_config[i].pipe_slave2master,
+		if (_pipe(lcore_config[i].pipe_worker2main,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
 
@@ -394,10 +394,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 	return fctret;
 }
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
 #include "eal_windows.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		return -EBUSY;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = _write(m2s, &c, 1);
+		n = _write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = _read(s2m, &c, 1);
+		n = _read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
 	int n, ret;
 	unsigned int lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
 
 		/* wait command */
 		do {
-			n = _read(m2s, &c, 1);
+			n = _read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = _write(s2m, &c, 1);
+			n = _write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
-- 
2.27.0


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 19:48  0%                       ` Michel Machado
@ 2020-10-14 13:10  0%                         ` Medvedkin, Vladimir
  2020-10-14 23:57  0%                           ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-14 13:10 UTC (permalink / raw)
  To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
	Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd



On 13/10/2020 20:48, Michel Machado wrote:
> On 10/13/20 3:06 PM, Medvedkin, Vladimir wrote:
>>
>>
>> On 13/10/2020 18:46, Michel Machado wrote:
>>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>>>> Hi Michel,
>>>>
>>>> Could you please describe a condition when LPM gets inconsistent? As 
>>>> I can see if there is no free tbl8 it will return -ENOSPC.
>>>
>>>     Consider this simple example, we need to add the following two 
>>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If 
>>> the LPM table is out of tbl8s, the second prefix is not added and 
>>> Gatekeeper will make decisions in violation of the policy. The data 
>>> structure of the LPM table is consistent, but its content 
>>> inconsistent with the policy.
>>
>> Aha, thanks. So do I understand correctly that you need to add a set 
>> of routes atomically (either the entire set is installed or nothing)?
> 
>     Yes.
> 
>> If so, then I would suggest having 2 lpm and switching them atomically 
>> after a successful addition. As for now, even if you have enough 
>> tbl8's, routes are installed non atomically, i.e. there will be a time 
>> gap between adding two routes, so in this time interval the table will 
>> be inconsistent with the policy.
>> Also, if new lpm algorithms are added to the DPDK, they won't have 
>> such a thing as tbl8.
> 
>     Our code already deals with synchronization.

OK, so my suggestion here would be to add new routes to the shadow copy 
of the LPM, and if it returns -ENOSPC, then create a new LPM with double 
the number of tbl8's and add all the routes to it. Then switch the 
active-shadow LPM pointers. In this case you'll always add a bulk of 
routes atomically, as roughly sketched below.
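
A rough, untested sketch of that idea (the struct route type and the
helper are hypothetical; only the rte_lpm calls are existing DPDK API):

    #include <errno.h>
    #include <rte_lpm.h>

    struct route { uint32_t ip; uint8_t depth; uint32_t next_hop; };

    static struct rte_lpm *
    build_shadow(const struct route *routes, unsigned int n,
                 struct rte_lpm_config cfg, int socket)
    {
            struct rte_lpm *lpm;
            unsigned int i;
            int ret;

    retry:
            lpm = rte_lpm_create("lpm_shadow", socket, &cfg);
            if (lpm == NULL)
                    return NULL;
            for (i = 0; i < n; i++) {
                    ret = rte_lpm_add(lpm, routes[i].ip, routes[i].depth,
                                      routes[i].next_hop);
                    if (ret == -ENOSPC) {
                            /* out of rule space or tbl8 groups: rebuild bigger */
                            rte_lpm_free(lpm);
                            cfg.max_rules *= 2;
                            cfg.number_tbl8s *= 2;
                            goto retry;
                    }
                    if (ret < 0) {
                            rte_lpm_free(lpm);
                            return NULL;
                    }
            }
            return lpm; /* caller swaps the active/shadow pointers */
    }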

> 
>>>     We minimize the need of replacing a LPM table by allocating LPM 
>>> tables with the double of what we need (see example here 
>>> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183), 
>>> but the code must be ready for unexpected needs that may arise in 
>>> production.
>>>
>>
>> Usually, the table is initialized with a large enough number of 
>> entries, enough to add a possible number of routes. One tbl8 group 
>> takes up 1Kb of memory which is nothing comparing to the size of tbl24 
>> which is 64Mb.
> 
>     When the prefixes come from BGP, initializing a large enough table 
> is fine. But when prefixes come from threat intelligence, the number of 
> prefixes can vary wildly and the number of prefixes above 24 bits are 
> way more common.
> 
>> P.S. consider using rte_fib library, it has a number of advantages 
>> over LPM. You can replace the loop in __lookup_fib_bulk() with a bulk 
>> lookup call and this will probably increase the speed.
> 
>     I'm not aware of the rte_fib library. The only documentation that I 
> found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it 
> just says "FIB (Forwarding information base) implementation for IPv4 
> Longest Prefix Match".

That's true; I'm going to add a programmer's guide soon.
The fib API is, in any case, very similar to the existing LPM API.

> 
>>>>
>>>> On 13/10/2020 15:58, Michel Machado wrote:
>>>>> Hi Kevin,
>>>>>
>>>>>     We do need fields max_rules and number_tbl8s of struct rte_lpm, 
>>>>> so the removal would force us to have another patch to our local 
>>>>> copy of DPDK. We'd rather avoid this new local patch because we 
>>>>> wish to eventually be in sync with the stock DPDK.
>>>>>
>>>>>     Those fields are needed in Gatekeeper because we found a 
>>>>> condition in an ongoing deployment in which the entries of some LPM 
>>>>> tables may suddenly change a lot to reflect policy changes. To 
>>>>> avoid getting into a state in which the LPM table is inconsistent 
>>>>> because it cannot fit all the new entries, we compute the needed 
>>>>> parameters to support the new entries, and compare with the current 
>>>>> parameters. If the current table doesn't fit everything, we have to 
>>>>> replace it with a new LPM table.
>>>>>
>>>>>     If there were a way to obtain the struct rte_lpm_config of a 
>>>>> given LPM table, it would cleanly address our need. We have the 
>>>>> same need in IPv6 and have a local patch to work around it (see 
>>>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
>>>>> Thus, an IPv4 and IPv6 solution would be best.
>>>>>
>>>>>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to 
>>>>> this discussion.
>>>>>
>>>>> [ ]'s
>>>>> Michel Machado
>>>>>
>>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>>>> Hi Gatekeeper maintainers (I think),
>>>>>>
>>>>>> fyi - there is a proposal to remove some members of a struct in 
>>>>>> DPDK LPM
>>>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 
>>>>>> but
>>>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>>>
>>>>>> The full thread is here:
>>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/ 
>>>>>>
>>>>>>
>>>>>> Maybe you can take a look and tell us if they are needed in 
>>>>>> Gatekeeper
>>>>>> or you can workaround it?
>>>>>>
>>>>>> thanks,
>>>>>> Kevin.
>>>>>>
>>>>>> [1]
>>>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248 
>>>>>>
>>>>>>
>>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>>>> <bruce.richardson@intel.com>
>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>>>
>>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>>>
>>>>>>>>>> Hi Ruifeng,
>>>>>>>>>>
>>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no 
>>>>>>>>>>>> need to
>>>>>>>>>>>> be exposed to the user.
>>>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>>>> maintainability.
>>>>>>>>>>>>
>>>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>>>> -
>>>>>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>>> <snip>
>>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h 
>>>>>>>>>>>> b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>>>
>>>>>>>>>>>>    /** @internal LPM structure. */
>>>>>>>>>>>>    struct rte_lpm {
>>>>>>>>>>>> -    /* LPM metadata. */
>>>>>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the 
>>>>>>>>>>>> lpm. */
>>>>>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; 
>>>>>>>>>>>> /**<
>>>>>>>>>> Rule info table. */
>>>>>>>>>>>> -
>>>>>>>>>>>>        /* LPM Tables. */
>>>>>>>>>>>>        struct rte_lpm_tbl_entry 
>>>>>>>>>>>> tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>>>>    };
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>>>
>>>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>>>> different, and that return value could be used by 
>>>>>>>>>>> rte_lpm_lookup()
>>>>>>>>>>> which as a static inline function will be in the binary and 
>>>>>>>>>>> using
>>>>>>>>>>> the old structure offsets.]
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>>>> without prior notice.
>>>>>>>>>>
>>>>>>>>> So if the change wants to happen in 20.11, a deprecation notice 
>>>>>>>>> should
>>>>>>>>> have been added in 20.08.
>>>>>>>>> I should have added a deprecation notice. This change will have 
>>>>>>>>> to wait for
>>>>>>>> next ABI update window.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Do you plan to extend? or is this just speculative?
>>>>>>> It is speculative.
>>>>>>>
>>>>>>>>
>>>>>>>> A quick scan and there seems to be several projects using some 
>>>>>>>> of these
>>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>>>> gatekeeper. I didn't look at the details to see if they are 
>>>>>>>> really needed.
>>>>>>>>
>>>>>>>> Not sure how much notice they'd need or if they update DPDK 
>>>>>>>> much, but I
>>>>>>>> think it's worth having a closer look as to how they use lpm and 
>>>>>>>> what the
>>>>>>>> impact to them is.
>>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't 
>>>>>>> access the members to be hidden.
>>>>>>> They will not be impacted by this patch.
>>>>>>> But Gatekeeper accesses the rte_lpm internal members that are to 
>>>>>>> be hidden. Its compilation will be broken with this patch.
>>>>>>>
>>>>>>>>
>>>>>>>>> Thanks.
>>>>>>>>> Ruifeng
>>>>>>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>>>>>>> -- 
>>>>>>>>>>>> 2.17.1
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> -- 
>>>>>>>>>> Regards,
>>>>>>>>>> Vladimir
>>>>>>>
>>>>>>
>>>>
>>

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
                         ` (2 preceding siblings ...)
  2020-10-14 10:41 15%       ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh Conor Walsh
@ 2020-10-14 10:41 18%       ` Conor Walsh
  3 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Update the "Checking Compilation" and "Checking ABI compatibility"
sections of the patch contribution guide.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 doc/guides/contributing/patches.rst | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c..e11d63bb0 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
 combinations of compilation configuration.
 By default, each build will be put in a subfolder of the current working directory.
 However, if it is preferred to place the builds in a different location,
-the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
-For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
-in a single subfolder called "__builds" created in the current directory.
-Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
+the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
+can be set to that desired location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
 
 
 .. _integrated_abi_check:
@@ -483,14 +482,17 @@ Checking ABI compatibility
 
 By default, ABI compatibility checks are disabled.
 
-To enable them, a reference version must be selected via the environment
-variable ``DPDK_ABI_REF_VERSION``.
-
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
-results in a subfolder of the current working directory.
-The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
-to a different location.
+To enable ABI checks the required reference version must be set using either the
+environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
+The tag ``latest`` is supported, which will select the latest quarterly release.
+e.g. ``./devtools/test-meson-builds.sh -a latest``.
+
+The ``devtools/test-meson-builds.sh`` script will then either build this reference version
+or download a cached version when available in a temporary directory and store the results
+in a subfolder of the current working directory.
+The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
+the results go to a different location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
 
 
 Sending Patches
-- 
2.25.1


^ permalink raw reply	[relevance 18%]

* [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
  2020-10-14 10:41 21%       ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
  2020-10-14 10:41 26%       ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-14 10:41 15%       ` Conor Walsh
  2020-10-14 10:41 18%       ` [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
  3 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Change dump file not found from an error to a warning to make check-abi.sh
compatible with the changes to test-meson-builds.sh needed to use
prebuilt references.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/check-abi.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfb..60d88777e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
 	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
-		echo "Error: can't find $name in $newdir"
-		error=1
+		echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
 		continue
 	fi
 	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
-- 
2.25.1


^ permalink raw reply	[relevance 15%]

* [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
  2020-10-14 10:41 21%       ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-14 10:41 26%       ` Conor Walsh
  2020-10-15 10:16  4%         ` Kinsella, Ray
  2020-10-14 10:41 15%       ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh Conor Walsh
  2020-10-14 10:41 18%       ` [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
  3 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

The core reason for this patch is to reduce the amount of time needed to
run ABI checks. The number of ABI checks being run has been reduced to
only two (one x86_64 and one Arm). The script can now also take advantage
of prebuilt ABI references.

Invoke using "./test-meson-builds.sh [-b <build directory>]
   [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
   [-d <directory for abi references>]"
 - <build directory>: directory to store builds (relative or absolute)
 - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
 - <uri for abi references>: http location or directory to get prebuilt
   abi references from
 - <directory for abi references>: directory to store abi references
   (relative or absolute)
e.g. "./test-meson-builds.sh -a latest"
If no flags are specified, test-meson-builds.sh will run the standard
meson tests with default options, unless the corresponding environment
variables are set.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 devtools/test-meson-builds.sh | 171 +++++++++++++++++++++++++++-------
 1 file changed, 139 insertions(+), 32 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index a87de635a..6b959eb63 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -1,12 +1,74 @@
 #! /bin/sh -e
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018-2020 Intel Corporation
 
 # Run meson to auto-configure the various builds.
 # * all builds get put in a directory whose name starts with "build-"
 # * if a build-directory already exists we assume it was properly configured
 # Run ninja after configuration is done.
 
+# Get arguments
+usage()
+{
+	echo "Usage: $0
+	      [-b <build directory>]
+	      [-a <dpdk tag or latest for abi check>]
+	      [-u <uri for abi references>]
+	      [-d <directory for abi references>]" 1>&2; exit 1;
+}
+
+# Placeholder default uri
+DPDK_ABI_DEFAULT_URI="http://abi-ref.dpdk.org"
+
+while getopts "a:u:d:b:h" arg; do
+	case $arg in
+	a)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	u)
+		if [ -n "$DPDK_ABI_TAR_URI" ]; then
+			echo "DPDK_ABI_TAR_URI and -u cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_TAR_URI=${OPTARG} ;;
+	d)
+		if [ -n "$DPDK_ABI_REF_DIR" ]; then
+			echo "DPDK_ABI_REF_DIR and -d cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_DIR=${OPTARG} ;;
+	b)
+		if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
+			echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_BUILD_TEST_DIR=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
+	if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
+		DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
+	        	sed "s/.*\///" | grep -v "r\|{}" |
+			grep '^[^.]*.[^.]*$' | tail -n 1)
+	elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
+		echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
+		exit 1
+	fi
+fi
+if [ -z $DPDK_ABI_TAR_URI ] ; then
+	DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
+fi
+# allow the generation script to override value with env var
+abi_checks_done=${DPDK_ABI_GEN_REF:-0}
+
 # set pipefail option if possible
 PIPEFAIL=""
 set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
@@ -16,7 +78,11 @@ srcdir=$(dirname $(readlink -f $0))/..
 
 MESON=${MESON:-meson}
 use_shared="--default-library=shared"
-builds_dir=${DPDK_BUILD_TEST_DIR:-.}
+builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
+# ensure path is absolute meson returns error when some paths are relative
+if echo "$builds_dir" | grep -qv '^/'; then
+        builds_dir=$srcdir/$builds_dir
+fi
 
 if command -v gmake >/dev/null 2>&1 ; then
 	MAKE=gmake
@@ -123,39 +189,49 @@ install_target () # <builddir> <installdir>
 	fi
 }
 
-build () # <directory> <target compiler | cross file> <meson options>
+abi_gen_check () # no options
 {
-	targetdir=$1
-	shift
-	crossfile=
-	[ -r $1 ] && crossfile=$1 || targetcc=$1
-	shift
-	# skip build if compiler not available
-	command -v ${CC##* } >/dev/null 2>&1 || return 0
-	if [ -n "$crossfile" ] ; then
-		cross="--cross-file $crossfile"
-		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
-			$crossfile | tr -d "'" | tr -d '"')
-	else
-		cross=
+	abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
+	mkdir -p $abirefdir
+	# ensure path is absolute meson returns error when some are relative
+	if echo "$abirefdir" | grep -qv '^/'; then
+		abirefdir=$srcdir/$abirefdir
 	fi
-	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $cross --werror $*
-	compile $builds_dir/$targetdir
-	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
-		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
-		if [ ! -d $abirefdir/$targetdir ]; then
+	if [ ! -d $abirefdir/$targetdir ]; then
+
+		# try to get abi reference
+		if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
+			if [ $abi_checks_done -gt -1 ]; then
+				if curl --head --fail --silent \
+					"$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
+					>/dev/null; then
+					curl -o $abirefdir/$targetdir.tar.gz \
+					$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
+				fi
+			fi
+		elif [ $abi_checks_done -gt -1 ]; then
+			if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
+				cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
+					$abirefdir/
+			fi
+		fi
+		if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
+			tar -xf $abirefdir/$targetdir.tar.gz \
+				-C $abirefdir >/dev/null
+			rm -rf $abirefdir/$targetdir.tar.gz
+		# if no reference can be found then generate one
+		else
 			# clone current sources
 			if [ ! -d $abirefdir/src ]; then
 				git clone --local --no-hardlinks \
-					--single-branch \
-					-b $DPDK_ABI_REF_VERSION \
-					$srcdir $abirefdir/src
+					  --single-branch \
+					  -b $DPDK_ABI_REF_VERSION \
+					  $srcdir $abirefdir/src
 			fi
 
 			rm -rf $abirefdir/build
 			config $abirefdir/src $abirefdir/build $cross \
-				-Dexamples= $*
+			       -Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
 			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
@@ -164,17 +240,46 @@ build () # <directory> <target compiler | cross file> <meson options>
 			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
 			rm -rf $abirefdir/$targetdir/usr/local/bin
 			rm -rf $abirefdir/$targetdir/usr/local/share
+			rm -rf $abirefdir/$targetdir/usr/local/lib
 		fi
+	fi
 
-		install_target $builds_dir/$targetdir \
-			$(readlink -f $builds_dir/$targetdir/install)
-		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install)
+	install_target $builds_dir/$targetdir \
+		$(readlink -f $builds_dir/$targetdir/install)
+	$srcdir/devtools/gen-abi.sh \
+		$(readlink -f $builds_dir/$targetdir/install)
+	# check abi if not generating references
+	if [ -z $DPDK_ABI_GEN_REF ] ; then
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
 	fi
 }
 
+build () # <directory> <target compiler | cross file> <meson options>
+{
+	targetdir=$1
+	shift
+	crossfile=
+	[ -r $1 ] && crossfile=$1 || targetcc=$1
+	shift
+	# skip build if compiler not available
+	command -v ${CC##* } >/dev/null 2>&1 || return 0
+	if [ -n "$crossfile" ] ; then
+		cross="--cross-file $crossfile"
+		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
+			$crossfile | tr -d "'" | tr -d '"')
+	else
+		cross=
+	fi
+	load_env $targetcc || return 0
+	config $srcdir $builds_dir/$targetdir $cross --werror $*
+	compile $builds_dir/$targetdir
+	if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
+		abi_gen_check
+		abi_checks_done=$((abi_checks_done+1))
+	fi
+}
+
 if [ "$1" = "-vv" ] ; then
 	TEST_MESON_BUILD_VERY_VERBOSE=1
 elif [ "$1" = "-v" ] ; then
@@ -189,7 +294,7 @@ fi
 # shared and static linked builds with gcc and clang
 for c in gcc clang ; do
 	command -v $c >/dev/null 2>&1 || continue
-	for s in static shared ; do
+	for s in shared static ; do
 		export CC="$CCACHE $c"
 		build build-$c-$s $c --default-library=$s
 		unset CC
@@ -211,6 +316,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
 
 # generic armv8a with clang as host compiler
 f=$srcdir/config/arm/arm64_armv8_linux_gcc
+# run abi checks with 1 arm build
+abi_checks_done=$((abi_checks_done-1))
 export CC="clang"
 build build-arm64-host-clang $f $use_shared
 unset CC
@@ -231,7 +338,7 @@ done
 build_path=$(readlink -f $builds_dir/build-x86-default)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
-if [ -z "$DPDK_ABI_REF_VERSION" ]; then
+if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
 	install_target $build_path $DESTDIR
 fi
 
-- 
2.25.1


^ permalink raw reply	[relevance 26%]

* [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
@ 2020-10-14 10:41 21%       ` Conor Walsh
  2020-10-15 10:15  4%         ` Kinsella, Ray
  2020-10-14 10:41 26%       ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patch adds a script that generates compressed archives
containing the .dump files which can be used to perform ABI
breakage checking in test-meson-builds.sh.

Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
 - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
e.g. "./gen-abi-tarballs.sh -v latest"

If no tag is specified, the script will default to "latest".
Using these parameters, the script will produce several *.tar.gz
archives containing the .dump files required for ABI breakage checking.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100755 devtools/gen-abi-tarballs.sh

diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
new file mode 100755
index 000000000..bcc1beac5
--- /dev/null
+++ b/devtools/gen-abi-tarballs.sh
@@ -0,0 +1,48 @@
+#! /bin/sh -e
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+# Generate the required prebuilt ABI references for test-meson-build.sh
+
+# Get arguments
+usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
+abi_tag=
+while getopts "v:h" arg; do
+	case $arg in
+	v)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -z $DPDK_ABI_REF_VERSION ] ; then
+	DPDK_ABI_REF_VERSION="latest"
+fi
+
+srcdir=$(dirname $(readlink -f $0))/..
+
+DPDK_ABI_GEN_REF=-20
+DPDK_ABI_REF_DIR=$srcdir/__abitarballs
+
+. $srcdir/devtools/test-meson-builds.sh
+
+abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
+
+rm -rf $abirefdir/build-*.tar.gz
+cd $abirefdir
+for f in build-* ; do
+	tar -czf $f.tar.gz $f
+done
+cp *.tar.gz ../
+rm -rf *
+mv ../*.tar.gz .
+rm -rf build-x86-default.tar.gz
+
+echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
-- 
2.25.1


^ permalink raw reply	[relevance 21%]

* [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
                       ` (4 preceding siblings ...)
  2020-10-14  9:37  4%     ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
@ 2020-10-14 10:41 10%     ` Conor Walsh
  2020-10-14 10:41 21%       ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
                         ` (3 more replies)
  5 siblings, 4 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patchset introduces changes to test-meson-builds.sh and check-abi.sh,
and adds a new script, gen-abi-tarballs.sh. The changes to
test-meson-builds.sh include UX improvements such as adding command line
arguments and allowing the use of relative paths. The number of ABI checks
has been reduced to just two, one for x86_64 and one for Arm; the references
for these checks can now be prebuilt and downloaded by test-meson-builds.sh,
which allows the checks to run much faster. check-abi.sh is updated to use
the prebuilt references. gen-abi-tarballs.sh is a new script that generates
the prebuilt ABI references used by test-meson-builds.sh; the compressed
archives can be retrieved from either a local directory or a remote HTTP
location.

---
v7: Changes resulting from list feedback

v6: Corrected a mistake in the doc patch

v5:
 - Patchset has been completely reworked following feedback
 - Patchset is now part of test-meson-builds.sh not the meson build
   system

v4:
 - Reworked both Python scripts to use more native Python functions
   and modules.
 - Python scripts are now in line with how other Python scripts in
   DPDK are structured.

v3:
 - Fix for bug which now allows meson < 0.48.0 to be used
 - Various coding style changes throughout
 - Minor bug fixes to the various meson.build files

v2: Spelling mistake, corrected spelling of environmental

Conor Walsh (4):
  devtools: add generation of compressed abi dump archives
  devtools: abi and UX changes for test-meson-builds.sh
  devtools: change dump file not found to warning in check-abi.sh
  doc: test-meson-builds.sh doc updates

 devtools/check-abi.sh               |   3 +-
 devtools/gen-abi-tarballs.sh        |  48 ++++++++
 devtools/test-meson-builds.sh       | 171 ++++++++++++++++++++++------
 doc/guides/contributing/patches.rst |  26 +++--
 4 files changed, 202 insertions(+), 46 deletions(-)
 create mode 100755 devtools/gen-abi-tarballs.sh

-- 
2.25.1


^ permalink raw reply	[relevance 10%]

* Re: [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks
  2020-10-14  9:37  4%     ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
@ 2020-10-14 10:33  4%       ` Walsh, Conor
  0 siblings, 0 replies; 200+ results
From: Walsh, Conor @ 2020-10-14 10:33 UTC (permalink / raw)
  To: Kinsella, Ray, nhorman, Richardson, Bruce, thomas, david.marchand; +Cc: dev

Thanks for your feedback, Ray,

V7 with your suggested changes for the patchset is on its way.

/Conor

> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Wednesday 14 October 2020 10:37
> To: Walsh, Conor <conor.walsh@intel.com>; nhorman@tuxdriver.com;
> Richardson, Bruce <bruce.richardson@intel.com>; thomas@monjalon.net;
> david.marchand@redhat.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v6 0/4] devtools: abi breakage checks
> 
> 
> 
> On 12/10/2020 14:03, Conor Walsh wrote:
> > This patchset will help developers discover abi breakages more easily
> > before upstreaming their code. Currently checking that the DPDK ABI
> > has not changed before up-streaming code is not intuitive and the
> > process is time consuming. Currently contributors must use the
> > test-meson-builds.sh tool, alongside some environmental variables to
> > test their changes. Contributors in many cases are either unaware or
> > unable to do this themselves, leading to a potentially serious situation
> > where they are unknowingly up-streaming code that breaks the ABI. These
> > breakages are caught by Travis, but it would be more efficient if they
> > were caught locally before up-streaming.
> 
> I would remove everything in the git log text before this line...
> 
> > This patchset introduces changes
> > to test-meson-builds.sh, check-abi.sh and adds a new script
> > gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX
> 
> UX changes = improvements
> 
> > changes such as adding command line arguments and allowing the use of
> > relative paths. Reduced the number of abi checks to just two, one for both
> > x86_64 and ARM, the references for these tests can now be prebuilt and
> > downloaded by test-meson-builds.sh, these changes will allow the tests to
> > run much faster. check-abi.sh is updated to use the prebuilt references.
> > gen-abi-tarballs.sh is a new script to generate the prebuilt abi
> > references used by test-meson-builds.sh, these compressed archives can
> be
> > retrieved from either a local directory or a remote http location.
> >
> > ---
> > v6: Corrected a mistake in the doc patch
> >
> > v5:
> >  - Patchset has been completely reworked following feedback
> >  - Patchset is now part of test-meson-builds.sh not the meson build system
> >
> > v4:
> >  - Reworked both Python scripts to use more native Python functions
> >    and modules.
> >  - Python scripts are now in line with how other Python scripts in
> >    DPDK are structured.
> >
> > v3:
> >  - Fix for bug which now allows meson < 0.48.0 to be used
> >  - Various coding style changes throughout
> >  - Minor bug fixes to the various meson.build files
> >
> > v2: Spelling mistake, corrected spelling of environmental
> >
> > Conor Walsh (4):
> >   devtools: add generation of compressed abi dump archives
> >   devtools: abi and UX changes for test-meson-builds.sh
> >   devtools: change dump file not found to warning in check-abi.sh
> >   doc: test-meson-builds.sh doc updates
> >
> >  devtools/check-abi.sh               |   3 +-
> >  devtools/gen-abi-tarballs.sh        |  48 ++++++++
> >  devtools/test-meson-builds.sh       | 170 ++++++++++++++++++++++------
> >  doc/guides/contributing/patches.rst |  26 +++--
> >  4 files changed, 201 insertions(+), 46 deletions(-)
> >  create mode 100755 devtools/gen-abi-tarballs.sh
> >

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates
  2020-10-12 13:03 18%     ` [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
@ 2020-10-14  9:46  0%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:46 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 12/10/2020 14:03, Conor Walsh wrote:
> Updates to the Checking Compilation and Checking ABI compatibility
> sections of the patches part of the contribution guide
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
> 
> ---
>  doc/guides/contributing/patches.rst | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
> 
> diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
> index 9ff60944c..e11d63bb0 100644
> --- a/doc/guides/contributing/patches.rst
> +++ b/doc/guides/contributing/patches.rst
> @@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
>  combinations of compilation configuration.
>  By default, each build will be put in a subfolder of the current working directory.
>  However, if it is preferred to place the builds in a different location,
> -the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
> -For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
> -in a single subfolder called "__builds" created in the current directory.
> -Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
> +the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
> +can be set to that desired location.
> +Environmental variables can also be specified in ``.config/dpdk/devel.config``.
>  
>  
>  .. _integrated_abi_check:
> @@ -483,14 +482,17 @@ Checking ABI compatibility
>  
>  By default, ABI compatibility checks are disabled.
>  
> -To enable them, a reference version must be selected via the environment
> -variable ``DPDK_ABI_REF_VERSION``.
> -
> -The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
> -then build this reference version in a temporary directory and store the
> -results in a subfolder of the current working directory.
> -The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
> -to a different location.
> +To enable ABI checks the required reference version must be set using either the
> +environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
> +The tag ``latest`` is supported, which will select the latest quarterly release.
> +e.g. ``./devtools/test-meson-builds.sh -a latest``.
> +
> +The ``devtools/test-meson-builds.sh`` script will then either build this reference version
> +or download a cached version when available in a temporary directory and store the results
> +in a subfolder of the current working directory.
> +The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
> +the results go to a different location.
> +Environmental variables can also be specified in ``.config/dpdk/devel.config``.
>  
>  
>  Sending Patches
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh
  2020-10-12 13:03 15%     ` [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
@ 2020-10-14  9:44  4%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:44 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 12/10/2020 14:03, Conor Walsh wrote:
> Change dump file not found from an error to a warning to make check-abi.sh
> compatible with the changes to test-meson-builds.sh needed to use
> prebuilt references.
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
> 
> ---
>  devtools/check-abi.sh | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
> index ab6748cfb..60d88777e 100755
> --- a/devtools/check-abi.sh
> +++ b/devtools/check-abi.sh
> @@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
>  	fi
>  	dump2=$(find $newdir -name $name)
>  	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
> -		echo "Error: can't find $name in $newdir"
> -		error=1
> +		echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
>  		continue
>  	fi
>  	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh
  2020-10-12 13:03 25%     ` [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-14  9:43  4%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:43 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 12/10/2020 14:03, Conor Walsh wrote:
> This patch adds new features to test-meson-builds.sh that help to make
> the process of using the script easier, the patch also includes
> changes to make the abi breakage checks more performant.

Avoid commentary such as the above. 

I would reduce the following list of bullets to a single paragraph describing the change.
The core of this change is to improve build times.
So describe reducing the number of builds to 2 and using the prebuilt references, and that's it.

> Changes/Additions:
>  - Command line arguments added, the changes are fully backwards
>    compatible and all previous environmental variables are still supported
>  - All paths supplied by user are converted to absolute paths if they
>    are relative as meson has a bug that can sometimes error if a
>    relative path is supplied to it.
>  - abi check/generation code moved to function to improve readability
>  - Only 2 abi checks will now be completed:
>     - 1 x86_64 gcc or clang check
>     - 1 ARM gcc or clang check
>    It is not necessary to check abi breakages in every build
>  - abi checks can now make use of prebuilt abi references from a http
>    or local source, it is hoped these would be hosted on dpdk.org in
>    the future.

<new line to aid reading>

> Invoke using "./test-meson-builds.sh [-b <build directory>]
>    [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
>    [-d <directory for abi references>]"
>  - <build directory>: directory to store builds (relative or absolute)
>  - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
>  - <uri for abi references>: http location or directory to get prebuilt
>    abi references from
>  - <directory for abi references>: directory to store abi references
>    (relative or absolute)
> e.g. "./test-meson-builds.sh -a latest"
> If no flags are specified test-meson-builds.sh will run the standard
> meson tests with default options unless environmental variables are
> specified.
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
> 
> ---
>  devtools/test-meson-builds.sh | 170 +++++++++++++++++++++++++++-------
>  1 file changed, 138 insertions(+), 32 deletions(-)
> 
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index a87de635a..b45506fb0 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -1,12 +1,73 @@
>  #! /bin/sh -e
>  # SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> +# Copyright(c) 2018-2020 Intel Corporation
>  
>  # Run meson to auto-configure the various builds.
>  # * all builds get put in a directory whose name starts with "build-"
>  # * if a build-directory already exists we assume it was properly configured
>  # Run ninja after configuration is done.
>  
> +# Get arguments
> +usage()
> +{
> +	echo "Usage: $0
> +	      [-b <build directory>]
> +	      [-a <dpdk tag or latest for abi check>]
> +	      [-u <uri for abi references>]
> +	      [-d <directory for abi references>]" 1>&2; exit 1;
> +}
> +
> +DPDK_ABI_DEFAULT_URI="http://dpdk.org/abi-refs"
> +
> +while getopts "a:u:d:b:h" arg; do
> +	case $arg in
> +	a)
> +		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> +			echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
> +			exit 1
> +		fi
> +		DPDK_ABI_REF_VERSION=${OPTARG} ;;
> +	u)
> +		if [ -n "$DPDK_ABI_TAR_URI" ]; then
> +			echo "DPDK_ABI_TAR_URI and -u cannot both be set"
> +			exit 1
> +		fi
> +		DPDK_ABI_TAR_URI=${OPTARG} ;;
> +	d)
> +		if [ -n "$DPDK_ABI_REF_DIR" ]; then
> +			echo "DPDK_ABI_REF_DIR and -d cannot both be set"
> +			exit 1
> +		fi
> +		DPDK_ABI_REF_DIR=${OPTARG} ;;
> +	b)
> +		if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
> +			echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
> +			exit 1
> +		fi
> +		DPDK_BUILD_TEST_DIR=${OPTARG} ;;
> +	h)
> +		usage ;;
> +	*)
> +		usage ;;
> +	esac
> +done
> +
> +if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
> +	if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
> +		DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
> +	        	sed "s/.*\///" | grep -v "r\|{}" |
> +			grep '^[^.]*.[^.]*$' | tail -n 1)
> +	elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
> +		echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
> +		exit 1
> +	fi
> +fi
> +if [ -z $DPDK_ABI_TAR_URI ] ; then
> +	DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
> +fi
> +# allow the generation script to override value with env var
> +abi_checks_done=${DPDK_ABI_GEN_REF:-0}
> +
>  # set pipefail option if possible
>  PIPEFAIL=""
>  set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
> @@ -16,7 +77,11 @@ srcdir=$(dirname $(readlink -f $0))/..
>  
>  MESON=${MESON:-meson}
>  use_shared="--default-library=shared"
> -builds_dir=${DPDK_BUILD_TEST_DIR:-.}
> +builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
> +# ensure path is absolute meson returns error when some paths are relative
> +if echo "$builds_dir" | grep -qv '^/'; then
> +        builds_dir=$srcdir/$builds_dir
> +fi
>  
>  if command -v gmake >/dev/null 2>&1 ; then
>  	MAKE=gmake
> @@ -123,39 +188,49 @@ install_target () # <builddir> <installdir>
>  	fi
>  }
>  
> -build () # <directory> <target compiler | cross file> <meson options>
> +abi_gen_check () # no options
>  {
> -	targetdir=$1
> -	shift
> -	crossfile=
> -	[ -r $1 ] && crossfile=$1 || targetcc=$1
> -	shift
> -	# skip build if compiler not available
> -	command -v ${CC##* } >/dev/null 2>&1 || return 0
> -	if [ -n "$crossfile" ] ; then
> -		cross="--cross-file $crossfile"
> -		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
> -			$crossfile | tr -d "'" | tr -d '"')
> -	else
> -		cross=
> +	abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
> +	mkdir -p $abirefdir
> +	# ensure path is absolute meson returns error when some are relative
> +	if echo "$abirefdir" | grep -qv '^/'; then
> +		abirefdir=$srcdir/$abirefdir
>  	fi
> -	load_env $targetcc || return 0
> -	config $srcdir $builds_dir/$targetdir $cross --werror $*
> -	compile $builds_dir/$targetdir
> -	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> -		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
> -		if [ ! -d $abirefdir/$targetdir ]; then
> +	if [ ! -d $abirefdir/$targetdir ]; then
> +
> +		# try to get abi reference
> +		if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
> +			if [ $abi_checks_done -gt -1 ]; then
> +				if curl --head --fail --silent \
> +					"$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
> +					>/dev/null; then
> +					curl -o $abirefdir/$targetdir.tar.gz \
> +					$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
> +				fi
> +			fi
> +		elif [ $abi_checks_done -gt -1 ]; then
> +			if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
> +				cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
> +					$abirefdir/
> +			fi
> +		fi
> +		if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
> +			tar -xf $abirefdir/$targetdir.tar.gz \
> +				-C $abirefdir >/dev/null
> +			rm -rf $abirefdir/$targetdir.tar.gz
> +		# if no reference can be found then generate one
> +		else
>  			# clone current sources
>  			if [ ! -d $abirefdir/src ]; then
>  				git clone --local --no-hardlinks \
> -					--single-branch \
> -					-b $DPDK_ABI_REF_VERSION \
> -					$srcdir $abirefdir/src
> +					  --single-branch \
> +					  -b $DPDK_ABI_REF_VERSION \
> +					  $srcdir $abirefdir/src
>  			fi
>  
>  			rm -rf $abirefdir/build
>  			config $abirefdir/src $abirefdir/build $cross \
> -				-Dexamples= $*
> +			       -Dexamples= $*
>  			compile $abirefdir/build
>  			install_target $abirefdir/build $abirefdir/$targetdir
>  			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
> @@ -164,17 +239,46 @@ build () # <directory> <target compiler | cross file> <meson options>
>  			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
>  			rm -rf $abirefdir/$targetdir/usr/local/bin
>  			rm -rf $abirefdir/$targetdir/usr/local/share
> +			rm -rf $abirefdir/$targetdir/usr/local/lib
>  		fi
> +	fi
>  
> -		install_target $builds_dir/$targetdir \
> -			$(readlink -f $builds_dir/$targetdir/install)
> -		$srcdir/devtools/gen-abi.sh \
> -			$(readlink -f $builds_dir/$targetdir/install)
> +	install_target $builds_dir/$targetdir \
> +		$(readlink -f $builds_dir/$targetdir/install)
> +	$srcdir/devtools/gen-abi.sh \
> +		$(readlink -f $builds_dir/$targetdir/install)
> +	# check abi if not generating references
> +	if [ -z $DPDK_ABI_GEN_REF ] ; then
>  		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
>  			$(readlink -f $builds_dir/$targetdir/install)
>  	fi
>  }
>  
> +build () # <directory> <target compiler | cross file> <meson options>
> +{
> +	targetdir=$1
> +	shift
> +	crossfile=
> +	[ -r $1 ] && crossfile=$1 || targetcc=$1
> +	shift
> +	# skip build if compiler not available
> +	command -v ${CC##* } >/dev/null 2>&1 || return 0
> +	if [ -n "$crossfile" ] ; then
> +		cross="--cross-file $crossfile"
> +		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
> +			$crossfile | tr -d "'" | tr -d '"')
> +	else
> +		cross=
> +	fi
> +	load_env $targetcc || return 0
> +	config $srcdir $builds_dir/$targetdir $cross --werror $*
> +	compile $builds_dir/$targetdir
> +	if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
> +		abi_gen_check
> +		abi_checks_done=$((abi_checks_done+1))
> +	fi
> +}
> +
>  if [ "$1" = "-vv" ] ; then
>  	TEST_MESON_BUILD_VERY_VERBOSE=1
>  elif [ "$1" = "-v" ] ; then
> @@ -189,7 +293,7 @@ fi
>  # shared and static linked builds with gcc and clang
>  for c in gcc clang ; do
>  	command -v $c >/dev/null 2>&1 || continue
> -	for s in static shared ; do
> +	for s in shared static ; do
>  		export CC="$CCACHE $c"
>  		build build-$c-$s $c --default-library=$s
>  		unset CC
> @@ -211,6 +315,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
>  
>  # generic armv8a with clang as host compiler
>  f=$srcdir/config/arm/arm64_armv8_linux_gcc
> +# run abi checks with 1 arm build
> +abi_checks_done=$((abi_checks_done-1))
>  export CC="clang"
>  build build-arm64-host-clang $f $use_shared
>  unset CC
> @@ -231,7 +337,7 @@ done
>  build_path=$(readlink -f $builds_dir/build-x86-default)
>  export DESTDIR=$build_path/install
>  # No need to reinstall if ABI checks are enabled
> -if [ -z "$DPDK_ABI_REF_VERSION" ]; then
> +if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
>  	install_target $build_path $DESTDIR
>  fi
>  
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives
  2020-10-12 13:03 21%     ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-14  9:38  4%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:38 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 12/10/2020 14:03, Conor Walsh wrote:
> This patch adds a script that generates compressed archives
> containing .dump files which can be used to perform abi
> breakage checking in test-meson-build.sh.

<new line to aid reading>

> Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
>  - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
> e.g. "./gen-abi-tarballs.sh -v latest"

<new line to aid reading>

> If no tag is specified, the script will default to "latest"
> Using these parameters the script will produce several *.tar.gz
> archives containing .dump files required to do abi breakage checking
> 
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
> 
> ---
>  devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
>  create mode 100755 devtools/gen-abi-tarballs.sh
> 
> diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
> new file mode 100755
> index 000000000..bcc1beac5
> --- /dev/null
> +++ b/devtools/gen-abi-tarballs.sh
> @@ -0,0 +1,48 @@
> +#! /bin/sh -e
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2020 Intel Corporation
> +
> +# Generate the required prebuilt ABI references for test-meson-build.sh
> +
> +# Get arguments
> +usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
> +abi_tag=
> +while getopts "v:h" arg; do
> +	case $arg in
> +	v)
> +		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> +			echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
> +			exit 1
> +		fi
> +		DPDK_ABI_REF_VERSION=${OPTARG} ;;
> +	h)
> +		usage ;;
> +	*)
> +		usage ;;
> +	esac
> +done
> +
> +if [ -z $DPDK_ABI_REF_VERSION ] ; then
> +	DPDK_ABI_REF_VERSION="latest"
> +fi
> +
> +srcdir=$(dirname $(readlink -f $0))/..
> +
> +DPDK_ABI_GEN_REF=-20
> +DPDK_ABI_REF_DIR=$srcdir/__abitarballs
> +
> +. $srcdir/devtools/test-meson-builds.sh
> +
> +abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
> +
> +rm -rf $abirefdir/build-*.tar.gz
> +cd $abirefdir
> +for f in build-* ; do
> +	tar -czf $f.tar.gz $f
> +done
> +cp *.tar.gz ../
> +rm -rf *
> +mv ../*.tar.gz .
> +rm -rf build-x86-default.tar.gz
> +
> +echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
                       ` (3 preceding siblings ...)
  2020-10-12 13:03 18%     ` [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
@ 2020-10-14  9:37  4%     ` Kinsella, Ray
  2020-10-14 10:33  4%       ` Walsh, Conor
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
  5 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:37 UTC (permalink / raw)
  To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev



On 12/10/2020 14:03, Conor Walsh wrote:
> This patchset will help developers discover abi breakages more easily
> before upstreaming their code. Currently checking that the DPDK ABI
> has not changed before up-streaming code is not intuitive and the
> process is time consuming. Currently contributors must use the
> test-meson-builds.sh tool, alongside some environmental variables to
> test their changes. Contributors in many cases are either unaware or
> unable to do this themselves, leading to a potentially serious situation
> where they are unknowingly up-streaming code that breaks the ABI. These
> breakages are caught by Travis, but it would be more efficient if they
> were caught locally before up-streaming. 

I would remove everything in the git log text before this line... 

> This patchset introduces changes
> to test-meson-builds.sh, check-abi.sh and adds a new script
> gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX

UX changes = improvements

> changes such as adding command line arguments and allowing the use of
> relative paths. Reduced the number of abi checks to just two, one for both
> x86_64 and ARM, the references for these tests can now be prebuilt and
> downloaded by test-meson-builds.sh, these changes will allow the tests to
> run much faster. check-abi.sh is updated to use the prebuilt references.
> gen-abi-tarballs.sh is a new script to generate the prebuilt abi
> references used by test-meson-builds.sh, these compressed archives can be
> retrieved from either a local directory or a remote http location.
> 
> ---
> v6: Corrected a mistake in the doc patch
> 
> v5:
>  - Patchset has been completely reworked following feedback
>  - Patchset is now part of test-meson-builds.sh not the meson build system
> 
> v4:
>  - Reworked both Python scripts to use more native Python functions
>    and modules.
>  - Python scripts are now in line with how other Python scripts in
>    DPDK are structured.
> 
> v3:
>  - Fix for bug which now allows meson < 0.48.0 to be used
>  - Various coding style changes throughout
>  - Minor bug fixes to the various meson.build files
> 
> v2: Spelling mistake, corrected spelling of environmental
> 
> Conor Walsh (4):
>   devtools: add generation of compressed abi dump archives
>   devtools: abi and UX changes for test-meson-builds.sh
>   devtools: change dump file not found to warning in check-abi.sh
>   doc: test-meson-builds.sh doc updates
> 
>  devtools/check-abi.sh               |   3 +-
>  devtools/gen-abi-tarballs.sh        |  48 ++++++++
>  devtools/test-meson-builds.sh       | 170 ++++++++++++++++++++++------
>  doc/guides/contributing/patches.rst |  26 +++--
>  4 files changed, 201 insertions(+), 46 deletions(-)
>  create mode 100755 devtools/gen-abi-tarballs.sh
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods
  2020-10-06 16:07  3%     ` Ananyev, Konstantin
@ 2020-10-14  9:23  4%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14  9:23 UTC (permalink / raw)
  To: Ananyev, Konstantin, David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran,
	Ruifeng Wang (Arm Technology China),
	Medvedkin, Vladimir, Thomas Monjalon, Richardson, Bruce



On 06/10/2020 17:07, Ananyev, Konstantin wrote:
> 
>>
>> On Mon, Oct 5, 2020 at 9:44 PM Konstantin Ananyev
>> <konstantin.ananyev@intel.com> wrote:
>>>
>>> These patch series introduce support of AVX512 specific classify
>>> implementation for ACL library.
>>> It adds two new algorithms:
>>>  - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
>>>    It uses 256-bit width instructions/registers only
>>>    (to avoid frequency level change).
>>>    On my SKX box test-acl shows ~15-30% improvement
>>>    (depending on rule-set and input burst size)
>>>    when switching from AVX2 to AVX512X16 classify algorithms.
>>>  - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
>>>    It uses 512-bit width instructions/registers and provides higher
>>>    performance then AVX512X16, but can cause frequency level change.
>>>    On my SKX box test-acl shows ~50-70% improvement
>>>    (depending on rule-set and input burst size)
>>>    when switching from AVX2 to AVX512X32 classify algorithms.
>>>    ICX and CLX testing showed similar level of speedup.
>>>
>>> Current AVX512 classify implementation is only supported on x86_64.
>>> Note that this series introduce a formal ABI incompatibility
>>
>> The only API change I can see is in rte_acl_classify_alg() new error
>> code but I don't think we need an announcement for this.
>> As for ABI, we are breaking it in this release, so I see no pb.
> 
> Cool, I just wanted to underline that patch #3:
> https://patches.dpdk.org/patch/79786/
> is a formal ABI breakage.

As David said, this is an ABI breaking release - so there is no requirement to maintain compatibility. 

https://doc.dpdk.org/guides/contributing/abi_policy.html

However the following requirements remain:-

* The acknowledgment of the maintainer of the component is mandatory, or if no maintainer is available for the component, the tree/sub-tree maintainer for that component must acknowledge the ABI change instead.
* The acknowledgment of three members of the technical board, as delegates of the technical board acknowledging the need for the ABI change, is also mandatory.

I guess you are the maintainer in this case, so that requirement is satisfied. 

> 
>>
>>
>>> with previous versions of ACL library.
>>>
>>> v2 -> v3:
>>>   Fix checkpatch warnings
>>>   Split AVX512 algorithm into two and deduplicate common code
>>
>> Patch 7 still references a RTE_MACHINE_CPUFLAG flag.
>> Can you rework now that those flags have been dropped?
>>
> 
> Should be fixed in v4:
> https://patches.dpdk.org/project/dpdk/list/?series=12721
> 
> One more thing to mention - this series has a dependency on Vladimir's patch:
> https://patches.dpdk.org/patch/79310/ ("eal/x86: introduce AVX 512-bit type"),
> so CI/travis would still report an error.
> 
> Thanks
> Konstantin
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 19:06  0%                     ` Medvedkin, Vladimir
@ 2020-10-13 19:48  0%                       ` Michel Machado
  2020-10-14 13:10  0%                         ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 19:48 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Kevin Traynor, Ruifeng Wang,
	Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd

On 10/13/20 3:06 PM, Medvedkin, Vladimir wrote:
> 
> 
> On 13/10/2020 18:46, Michel Machado wrote:
>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>>> Hi Michel,
>>>
>>> Could you please describe a condition when LPM gets inconsistent? As 
>>> I can see if there is no free tbl8 it will return -ENOSPC.
>>
>>     Consider this simple example, we need to add the following two 
>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If 
>> the LPM table is out of tbl8s, the second prefix is not added and 
>> Gatekeeper will make decisions in violation of the policy. The data 
>> structure of the LPM table is consistent, but its content inconsistent 
>> with the policy.
> 
> Aha, thanks. So do I understand correctly that you need to add a set of 
> routes atomically (either the entire set is installed or nothing)?

    Yes.

> If so, then I would suggest having 2 lpm and switching them atomically 
> after a successful addition. As for now, even if you have enough tbl8's, 
> routes are installed non atomically, i.e. there will be a time gap 
> between adding two routes, so in this time interval the table will be 
> inconsistent with the policy.
> Also, if new lpm algorithms are added to the DPDK, they won't have such 
> a thing as tbl8.

    Our code already deals with synchronization.

>>     We minimize the need of replacing a LPM table by allocating LPM 
>> tables with the double of what we need (see example here 
>> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183), 
>> but the code must be ready for unexpected needs that may arise in 
>> production.
>>
> 
> Usually, the table is initialized with a large enough number of entries, 
> enough to add a possible number of routes. One tbl8 group takes up 1Kb 
> of memory which is nothing comparing to the size of tbl24 which is 64Mb.

    When the prefixes come from BGP, initializing a large enough table
is fine. But when prefixes come from threat intelligence, the number of
prefixes can vary wildly and prefixes longer than 24 bits are far more
common.

> P.S. consider using rte_fib library, it has a number of advantages over 
> LPM. You can replace the loop in __lookup_fib_bulk() with a bulk lookup 
> call and this will probably increase the speed.

    I'm not aware of the rte_fib library. The only documentation that I 
found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it 
just says "FIB (Forwarding information base) implementation for IPv4 
Longest Prefix Match".

>>>
>>> On 13/10/2020 15:58, Michel Machado wrote:
>>>> Hi Kevin,
>>>>
>>>>     We do need fields max_rules and number_tbl8s of struct rte_lpm, 
>>>> so the removal would force us to have another patch to our local 
>>>> copy of DPDK. We'd rather avoid this new local patch because we wish 
>>>> to eventually be in sync with the stock DPDK.
>>>>
>>>>     Those fields are needed in Gatekeeper because we found a 
>>>> condition in an ongoing deployment in which the entries of some LPM 
>>>> tables may suddenly change a lot to reflect policy changes. To avoid 
>>>> getting into a state in which the LPM table is inconsistent because 
>>>> it cannot fit all the new entries, we compute the needed parameters 
>>>> to support the new entries, and compare with the current parameters. 
>>>> If the current table doesn't fit everything, we have to replace it 
>>>> with a new LPM table.
>>>>
>>>>     If there were a way to obtain the struct rte_lpm_config of a 
>>>> given LPM table, it would cleanly address our need. We have the same 
>>>> need in IPv6 and have a local patch to work around it (see 
>>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
>>>> Thus, an IPv4 and IPv6 solution would be best.
>>>>
>>>>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to 
>>>> this disscussion.
>>>>
>>>> [ ]'s
>>>> Michel Machado
>>>>
>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>>> Hi Gatekeeper maintainers (I think),
>>>>>
>>>>> fyi - there is a proposal to remove some members of a struct in 
>>>>> DPDK LPM
>>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>>
>>>>> The full thread is here:
>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>>>
>>>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>>>> or you can workaround it?
>>>>>
>>>>> thanks,
>>>>> Kevin.
>>>>>
>>>>> [1]
>>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248 
>>>>>
>>>>>
>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>>> <bruce.richardson@intel.com>
>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>>
>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>>
>>>>>>>>> Hi Ruifeng,
>>>>>>>>>
>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no 
>>>>>>>>>>> need to
>>>>>>>>>>> be exposed to the user.
>>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>>> maintainability.
>>>>>>>>>>>
>>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>>> ---
>>>>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>>> -
>>>>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>>
>>>>>>>>>> <snip>
>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>>
>>>>>>>>>>>    /** @internal LPM structure. */
>>>>>>>>>>>    struct rte_lpm {
>>>>>>>>>>> -    /* LPM metadata. */
>>>>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the 
>>>>>>>>>>> lpm. */
>>>>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>>>> Rule info table. */
>>>>>>>>>>> -
>>>>>>>>>>>        /* LPM Tables. */
>>>>>>>>>>>        struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>>>    };
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>>
>>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>>> different, and that return value could be used by 
>>>>>>>>>> rte_lpm_lookup()
>>>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>>>> the old structure offsets.]
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>>> without prior notice.
>>>>>>>>>
>>>>>>>> So if the change wants to happen in 20.11, a deprecation notice 
>>>>>>>> should
>>>>>>>> have been added in 20.08.
>>>>>>>> I should have added a deprecation notice. This change will have 
>>>>>>>> to wait for
>>>>>>> next ABI update window.
>>>>>>>>
>>>>>>>
>>>>>>> Do you plan to extend? or is this just speculative?
>>>>>> It is speculative.
>>>>>>
>>>>>>>
>>>>>>> A quick scan and there seems to be several projects using some of 
>>>>>>> these
>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>>> gatekeeper. I didn't look at the details to see if they are 
>>>>>>> really needed.
>>>>>>>
>>>>>>> Not sure how much notice they'd need or if they update DPDK much, 
>>>>>>> but I
>>>>>>> think it's worth having a closer look as to how they use lpm and 
>>>>>>> what the
>>>>>>> impact to them is.
>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't 
>>>>>> access the members to be hided.
>>>>>> They will not be impacted by this patch.
>>>>>> But Gatekeeper accesses the rte_lpm internal members that to be 
>>>>>> hided. Its compilation will be broken with this patch.
>>>>>>
>>>>>>>
>>>>>>>> Thanks.
>>>>>>>> Ruifeng
>>>>>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>>>>>> -- 
>>>>>>>>>>> 2.17.1
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -- 
>>>>>>>>> Regards,
>>>>>>>>> Vladimir
>>>>>>
>>>>>
>>>
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 15:41  0%                 ` Medvedkin, Vladimir
@ 2020-10-13 17:46  0%                   ` Michel Machado
  2020-10-13 19:06  0%                     ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 17:46 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Kevin Traynor, Ruifeng Wang,
	Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd

On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
> Hi Michel,
> 
> Could you please describe a condition when LPM gets inconsistent? As I 
> can see if there is no free tbl8 it will return -ENOSPC.

    Consider this simple example: we need to add the following two
prefixes with different next hops: 10.99.0.0/16 and 18.99.99.128/25. If
the LPM table is out of tbl8s, the second prefix is not added and
Gatekeeper will make decisions in violation of the policy. The data
structure of the LPM table is consistent, but its content is
inconsistent with the policy.
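
    To make that concrete, a minimal sketch of the two insertions (the
next-hop values are arbitrary placeholders) once the tbl8 groups are
exhausted:

#include <rte_ip.h>
#include <rte_lpm.h>

/* Minimal sketch: the /16 lands in tbl24, but the /25 needs a free tbl8
 * group, so it fails with -ENOSPC when the table has run out of them and
 * the installed routes no longer match the policy. */
static int
add_policy_prefixes(struct rte_lpm *lpm)
{
	int ret;

	ret = rte_lpm_add(lpm, RTE_IPV4(10, 99, 0, 0), 16, 1);
	if (ret < 0)
		return ret;

	ret = rte_lpm_add(lpm, RTE_IPV4(18, 99, 99, 128), 25, 2);
	if (ret < 0)
		return ret;	/* e.g. -ENOSPC: policy only partially installed */

	return 0;
}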

    We minimize the need to replace an LPM table by allocating LPM
tables with double the capacity we currently need (see the example here
https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
but the code must be ready for unexpected needs that may arise in
production.

> 
> On 13/10/2020 15:58, Michel Machado wrote:
>> Hi Kevin,
>>
>>     We do need fields max_rules and number_tbl8s of struct rte_lpm, so 
>> the removal would force us to have another patch to our local copy of 
>> DPDK. We'd rather avoid this new local patch because we wish to 
>> eventually be in sync with the stock DPDK.
>>
>>     Those fields are needed in Gatekeeper because we found a condition 
>> in an ongoing deployment in which the entries of some LPM tables may 
>> suddenly change a lot to reflect policy changes. To avoid getting into 
>> a state in which the LPM table is inconsistent because it cannot fit 
>> all the new entries, we compute the needed parameters to support the 
>> new entries, and compare with the current parameters. If the current 
>> table doesn't fit everything, we have to replace it with a new LPM table.
>>
>>     If there were a way to obtain the struct rte_lpm_config of a given 
>> LPM table, it would cleanly address our need. We have the same need in 
>> IPv6 and have a local patch to work around it (see 
>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
>> Thus, an IPv4 and IPv6 solution would be best.
>>
>>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this 
>> disscussion.
>>
>> [ ]'s
>> Michel Machado
>>
>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>> Hi Gatekeeper maintainers (I think),
>>>
>>> fyi - there is a proposal to remove some members of a struct in DPDK LPM
>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>
>>> The full thread is here:
>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>
>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>> or you can workaround it?
>>>
>>> thanks,
>>> Kevin.
>>>
>>> [1]
>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248 
>>>
>>>
>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>> <bruce.richardson@intel.com>
>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>
>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>
>>>>>>> Hi Ruifeng,
>>>>>>>
>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>>> be exposed to the user.
>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>> maintainability.
>>>>>>>>>
>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>> ---
>>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>> -
>>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>
>>>>>>>> <snip>
>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>
>>>>>>>>>    /** @internal LPM structure. */
>>>>>>>>>    struct rte_lpm {
>>>>>>>>> -    /* LPM metadata. */
>>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
>>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>> Rule info table. */
>>>>>>>>> -
>>>>>>>>>        /* LPM Tables. */
>>>>>>>>>        struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>    };
>>>>>>>>>
>>>>>>>>
>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>
>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>> the old structure offsets.]
>>>>>>>>
>>>>>>>
>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>> without prior notice.
>>>>>>>
>>>>>> So if the change wants to happen in 20.11, a deprecation notice 
>>>>>> should
>>>>>> have been added in 20.08.
>>>>>> I should have added a deprecation notice. This change will have to 
>>>>>> wait for
>>>>> next ABI update window.
>>>>>>
>>>>>
>>>>> Do you plan to extend? or is this just speculative?
>>>> It is speculative.
>>>>
>>>>>
>>>>> A quick scan and there seems to be several projects using some of 
>>>>> these
>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>> gatekeeper. I didn't look at the details to see if they are really 
>>>>> needed.
>>>>>
>>>>> Not sure how much notice they'd need or if they update DPDK much, 
>>>>> but I
>>>>> think it's worth having a closer look as to how they use lpm and 
>>>>> what the
>>>>> impact to them is.
>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't 
>>>> access the members to be hided.
>>>> They will not be impacted by this patch.
>>>> But Gatekeeper accesses the rte_lpm internal members that to be 
>>>> hided. Its compilation will be broken with this patch.
>>>>
>>>>>
>>>>>> Thanks.
>>>>>> Ruifeng
>>>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>>>> -- 
>>>>>>>>> 2.17.1
>>>>>>>>>
>>>>>>>
>>>>>>> -- 
>>>>>>> Regards,
>>>>>>> Vladimir
>>>>
>>>
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 13:53  0%             ` Kevin Traynor
@ 2020-10-13 14:58  0%               ` Michel Machado
  2020-10-13 15:41  0%                 ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 14:58 UTC (permalink / raw)
  To: Kevin Traynor, Ruifeng Wang, Medvedkin, Vladimir,
	Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd

Hi Kevin,

    We do need fields max_rules and number_tbl8s of struct rte_lpm, so 
the removal would force us to have another patch to our local copy of 
DPDK. We'd rather avoid this new local patch because we wish to 
eventually be in sync with the stock DPDK.

    Those fields are needed in Gatekeeper because we found a condition 
in an ongoing deployment in which the entries of some LPM tables may 
suddenly change a lot to reflect policy changes. To avoid getting into a 
state in which the LPM table is inconsistent because it cannot fit all 
the new entries, we compute the needed parameters to support the new 
entries, and compare with the current parameters. If the current table 
doesn't fit everything, we have to replace it with a new LPM table.
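
    Roughly, the check looks like the sketch below; the helper names and
the doubling factor are just for illustration, but it relies on reading
max_rules and number_tbl8s from the table:

#include <stdbool.h>
#include <rte_lpm.h>

/* Sketch only: helper names and the doubling factor are illustrative.
 * The check relies on the max_rules and number_tbl8s fields that this
 * patch proposes to hide. */
static bool
lpm_fits_new_policy(const struct rte_lpm *lpm,
		uint32_t required_rules, uint32_t required_tbl8s)
{
	return required_rules <= lpm->max_rules &&
		required_tbl8s <= lpm->number_tbl8s;
}

static struct rte_lpm *
lpm_replacement(const char *name, int socket_id,
		uint32_t required_rules, uint32_t required_tbl8s)
{
	struct rte_lpm_config config = {
		.max_rules = 2 * required_rules,	/* headroom */
		.number_tbl8s = 2 * required_tbl8s,
		.flags = 0,
	};

	return rte_lpm_create(name, socket_id, &config);
}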

    If there were a way to obtain the struct rte_lpm_config of a given 
LPM table, it would cleanly address our need. We have the same need in 
IPv6 and have a local patch to work around it (see 
https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
Thus, an IPv4 and IPv6 solution would be best.

    PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this
discussion.

[ ]'s
Michel Machado

On 10/13/20 9:53 AM, Kevin Traynor wrote:
> Hi Gatekeeper maintainers (I think),
> 
> fyi - there is a proposal to remove some members of a struct in DPDK LPM
> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
> as it's an LTS I guess it would probably hit Debian in a few months.
> 
> The full thread is here:
> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
> 
> Maybe you can take a look and tell us if they are needed in Gatekeeper
> or you can workaround it?
> 
> thanks,
> Kevin.
> 
> [1]
> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
> 
> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>
>>> -----Original Message-----
>>> From: Kevin Traynor <ktraynor@redhat.com>
>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>> <bruce.richardson@intel.com>
>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>
>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>> <Ruifeng.Wang@arm.com>
>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>
>>>>> Hi Ruifeng,
>>>>>
>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>> be exposed to the user.
>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>> maintainability.
>>>>>>>
>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>> ---
>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>> +++++++++++++++++++++++---------------
>>>>> -
>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>
>>>>>> <snip>
>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>
>>>>>>>    /** @internal LPM structure. */
>>>>>>>    struct rte_lpm {
>>>>>>> -	/* LPM metadata. */
>>>>>>> -	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
>>>>>>> -	uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>> -	uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>> -	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>> Rule info table. */
>>>>>>> -
>>>>>>>    	/* LPM Tables. */
>>>>>>>    	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>    			__rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>    	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>> -	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>    };
>>>>>>>
>>>>>>
>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>
>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>> which as a static inline function will be in the binary and using
>>>>>> the old structure offsets.]
>>>>>>
>>>>>
>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>> without prior notice.
>>>>>
>>>> So if the change wants to happen in 20.11, a deprecation notice should
>>>> have been added in 20.08.
>>>> I should have added a deprecation notice. This change will have to wait for
>>> next ABI update window.
>>>>
>>>
>>> Do you plan to extend? or is this just speculative?
>> It is speculative.
>>
>>>
>>> A quick scan and there seems to be several projects using some of these
>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>> gatekeeper. I didn't look at the details to see if they are really needed.
>>>
>>> Not sure how much notice they'd need or if they update DPDK much, but I
>>> think it's worth having a closer look as to how they use lpm and what the
>>> impact to them is.
>> Checked the projects listed above. BESS, NFF-Go and DPVS don't access the members to be hided.
>> They will not be impacted by this patch.
>> But Gatekeeper accesses the rte_lpm internal members that to be hided. Its compilation will be broken with this patch.
>>
>>>
>>>> Thanks.
>>>> Ruifeng
>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>> --
>>>>>>> 2.17.1
>>>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Vladimir
>>
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
  2020-10-12 19:09  4%     ` Pavan Nikhilesh Bhagavatula
@ 2020-10-13 19:20  4%       ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-13 19:20 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula
  Cc: Van Haaren, Harry, McDaniel, Timothy, Jerin Jacob Kollanukkaran,
	Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
	Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori, dev, Carrillo,
	Erik G, Eads, Gage, hemant.agrawal

On Tue, Oct 13, 2020 at 12:39 AM Pavan Nikhilesh Bhagavatula
<pbhagavatula@marvell.com> wrote:
>
> >> Subject: [PATCH v2 2/2] eventdev: update app and examples for new
> >eventdev ABI
> >>
> >> Several data structures and constants changed, or were added,
> >> in the previous patch.  This commit updates the dependent
> >> apps and examples to use the new ABI.
> >>
> >> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
>
> With fixes to trace framework
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

@McDaniel, Timothy ,

The series has apply issues [1]. Could you send the final version with
the Acks from Harry and Pavan?
I will merge this series for RC1, and let's move the DLB PMD driver
updates to RC2.

[1]
[for-main]dell[dpdk-next-eventdev] $ date &&
/home/jerin/config/scripts/build_each_patch.sh /tmp/r/ && date
Wed Oct 14 12:41:19 AM IST 2020
HEAD is now at b7a8eea2c app/eventdev: enable fast free offload
meson build test
Applying: eventdev: eventdev: express DLB/DLB2 PMD constraints
Using index info to reconstruct a base tree...
M       drivers/event/dpaa2/dpaa2_eventdev.c
M       drivers/event/octeontx/ssovf_evdev.c
M       drivers/event/octeontx2/otx2_evdev.c
M       lib/librte_eventdev/rte_event_eth_tx_adapter.c
M       lib/librte_eventdev/rte_eventdev.c
Falling back to patching base and 3-way merge...
Auto-merging lib/librte_eventdev/rte_eventdev.c
CONFLICT (content): Merge conflict in lib/librte_eventdev/rte_eventdev.c
Auto-merging lib/librte_eventdev/rte_event_eth_tx_adapter.c
Auto-merging drivers/event/octeontx2/otx2_evdev.c
Auto-merging drivers/event/octeontx/ssovf_evdev.c
Auto-merging drivers/event/dpaa2/dpaa2_eventdev.c
Recorded preimage for 'lib/librte_eventdev/rte_eventdev.c'
error: Failed to merge in the changes.
Patch failed at 0001 eventdev: eventdev: express DLB/DLB2 PMD constraints
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
git am failed /tmp/r//v2-1-2-eventdev-eventdev-express-DLB-DLB2-PMD-constraints
HEAD is now at b7a8eea2c app/eventdev: enable fast free offload
Wed Oct 14 12:41:19 AM IST 2020

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 17:46  0%                   ` Michel Machado
@ 2020-10-13 19:06  0%                     ` Medvedkin, Vladimir
  2020-10-13 19:48  0%                       ` Michel Machado
  0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-13 19:06 UTC (permalink / raw)
  To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
	Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd



On 13/10/2020 18:46, Michel Machado wrote:
> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>> Hi Michel,
>>
>> Could you please describe a condition when LPM gets inconsistent? As I 
>> can see if there is no free tbl8 it will return -ENOSPC.
> 
>     Consider this simple example, we need to add the following two 
> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If the 
> LPM table is out of tbl8s, the second prefix is not added and Gatekeeper 
> will make decisions in violation of the policy. The data structure of 
> the LPM table is consistent, but its content inconsistent with the policy.

Aha, thanks. So do I understand correctly that you need to add a set of 
routes atomically (either the entire set is installed or nothing)?

If so, then I would suggest having two LPM tables and switching them
atomically after a successful addition. As it stands, even if you have
enough tbl8's, routes are installed non-atomically, i.e. there will be
a time gap between adding two routes, so in this interval the table
will be inconsistent with the policy.
Also, if new LPM algorithms are added to DPDK, they won't have such a
thing as a tbl8.
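
Something like the sketch below: the lookup path reads a single pointer
and a fully built replacement table is published with one atomic store
(reclaiming the old table, e.g. with rte_rcu_qsbr, is left out):

#include <rte_lpm.h>

/* Sketch of the two-table idea: build the replacement table on the side,
 * then publish it with one atomic pointer store; readers load the pointer
 * with __atomic_load_n(..., __ATOMIC_ACQUIRE). Reclaiming the old table
 * (e.g. with rte_rcu_qsbr) is left out of this sketch. */
static struct rte_lpm *active_lpm;	/* read by the lookup path */

static struct rte_lpm *
publish_lpm(struct rte_lpm *new_lpm)
{
	/* Returns the previous table so the caller can free it later. */
	return __atomic_exchange_n(&active_lpm, new_lpm, __ATOMIC_RELEASE);
}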

> 
>     We minimize the need of replacing a LPM table by allocating LPM 
> tables with the double of what we need (see example here 
> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183), 
> but the code must be ready for unexpected needs that may arise in 
> production.
> 

Usually, the table is initialized with a large enough number of entries
to hold the expected number of routes. One tbl8 group takes up 1KB of
memory, which is nothing compared to the size of the tbl24, which is 64MB.

P.S. Consider using the rte_fib library; it has a number of advantages
over LPM. You can replace the loop in __lookup_fib_bulk() with a bulk
lookup call, which will probably increase the speed.
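
For example, assuming the fib handle and the array of destination
addresses (in host byte order) come from the caller, the loop collapses
to a single call:

#include <rte_fib.h>

/* Sketch of the bulk lookup idea: resolve a whole burst of destination
 * addresses (host byte order) with a single call instead of looping. */
static int
fib_lookup_burst(struct rte_fib *fib, uint32_t *ips, uint64_t *next_hops,
		int n)
{
	return rte_fib_lookup_bulk(fib, ips, next_hops, n);
}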

>>
>> On 13/10/2020 15:58, Michel Machado wrote:
>>> Hi Kevin,
>>>
>>>     We do need fields max_rules and number_tbl8s of struct rte_lpm, 
>>> so the removal would force us to have another patch to our local copy 
>>> of DPDK. We'd rather avoid this new local patch because we wish to 
>>> eventually be in sync with the stock DPDK.
>>>
>>>     Those fields are needed in Gatekeeper because we found a 
>>> condition in an ongoing deployment in which the entries of some LPM 
>>> tables may suddenly change a lot to reflect policy changes. To avoid 
>>> getting into a state in which the LPM table is inconsistent because 
>>> it cannot fit all the new entries, we compute the needed parameters 
>>> to support the new entries, and compare with the current parameters. 
>>> If the current table doesn't fit everything, we have to replace it 
>>> with a new LPM table.
>>>
>>>     If there were a way to obtain the struct rte_lpm_config of a 
>>> given LPM table, it would cleanly address our need. We have the same 
>>> need in IPv6 and have a local patch to work around it (see 
>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
>>> Thus, an IPv4 and IPv6 solution would be best.
>>>
>>>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this 
>>> disscussion.
>>>
>>> [ ]'s
>>> Michel Machado
>>>
>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>> Hi Gatekeeper maintainers (I think),
>>>>
>>>> fyi - there is a proposal to remove some members of a struct in DPDK 
>>>> LPM
>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>
>>>> The full thread is here:
>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>>
>>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>>> or you can workaround it?
>>>>
>>>> thanks,
>>>> Kevin.
>>>>
>>>> [1]
>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248 
>>>>
>>>>
>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>> <bruce.richardson@intel.com>
>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>
>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>
>>>>>>>> Hi Ruifeng,
>>>>>>>>
>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>>>> be exposed to the user.
>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>> maintainability.
>>>>>>>>>>
>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>> ---
>>>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>> -
>>>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>
>>>>>>>>> <snip>
>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>
>>>>>>>>>>    /** @internal LPM structure. */
>>>>>>>>>>    struct rte_lpm {
>>>>>>>>>> -    /* LPM metadata. */
>>>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
>>>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>>> Rule info table. */
>>>>>>>>>> -
>>>>>>>>>>        /* LPM Tables. */
>>>>>>>>>>        struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>>    };
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>
>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>>> the old structure offsets.]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>> without prior notice.
>>>>>>>>
>>>>>>> So if the change wants to happen in 20.11, a deprecation notice 
>>>>>>> should
>>>>>>> have been added in 20.08.
>>>>>>> I should have added a deprecation notice. This change will have 
>>>>>>> to wait for
>>>>>> next ABI update window.
>>>>>>>
>>>>>>
>>>>>> Do you plan to extend? or is this just speculative?
>>>>> It is speculative.
>>>>>
>>>>>>
>>>>>> A quick scan and there seems to be several projects using some of 
>>>>>> these
>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>> gatekeeper. I didn't look at the details to see if they are really 
>>>>>> needed.
>>>>>>
>>>>>> Not sure how much notice they'd need or if they update DPDK much, 
>>>>>> but I
>>>>>> think it's worth having a closer look as to how they use lpm and 
>>>>>> what the
>>>>>> impact to them is.
>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't 
>>>>> access the members to be hided.
>>>>> They will not be impacted by this patch.
>>>>> But Gatekeeper accesses the rte_lpm internal members that to be 
>>>>> hided. Its compilation will be broken with this patch.
>>>>>
>>>>>>
>>>>>>> Thanks.
>>>>>>> Ruifeng
>>>>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>>>>> -- 
>>>>>>>>>> 2.17.1
>>>>>>>>>>
>>>>>>>>
>>>>>>>> -- 
>>>>>>>> Regards,
>>>>>>>> Vladimir
>>>>>
>>>>
>>

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 2/5] ethdev: add new attributes to hairpin config
  @ 2020-10-13 16:19  4%       ` Bing Zhao
  0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-13 16:19 UTC (permalink / raw)
  To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
	bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

To support two-port hairpin mode and keep backward compatibility for
the application, two new attribute members are added to the hairpin
queue configuration structure.

`tx_explicit` indicates whether the application itself will insert the
TX part of the flow rules. If not set, the PMD will insert the rules
implicitly.
`manual_bind` indicates whether the application will bind the hairpin
TX queue to its peer RX queue manually, instead of the binding being
done automatically during the device start stage.

Different TX and RX queue pairs could have different values, but it is
highly recommended that all paired queues between one egress port and
its peer ingress port have the same values, so as not to bring any
inconsistency to the system. The actual support of these attribute
parameters will be checked and decided by the PMD drivers.

In single-port hairpin mode, if both attributes are left at zero, the
behavior remains the same as before: no bind API needs to be called and
no TX flow rules need to be inserted manually by the application.
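
A rough usage sketch from the application side (port, queue and
descriptor values are placeholders; leaving both new bits at zero keeps
the previous single-port behavior):

#include <rte_ethdev.h>

/* Rough sketch: request explicit TX flow rules and manual binding for an
 * RX hairpin queue whose peer is a TX queue on another port. Values are
 * placeholders; leaving both new bits at 0 keeps the old behavior. */
static int
setup_hairpin_rxq(uint16_t rx_port, uint16_t tx_port, uint16_t queue_id,
		uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.tx_explicit = 1,	/* application inserts the TX flow rules */
		.manual_bind = 1,	/* application calls the bind API itself */
	};

	conf.peers[0].port = tx_port;
	conf.peers[0].queue = queue_id;

	return rte_eth_rx_hairpin_queue_setup(rx_port, queue_id, nb_desc,
					      &conf);
}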

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
 doc/guides/prog_guide/rte_flow.rst     |  3 +++
 doc/guides/rel_notes/release_20_11.rst |  6 ++++++
 lib/librte_ethdev/rte_ethdev.c         |  8 ++++----
 lib/librte_ethdev/rte_ethdev.h         | 27 ++++++++++++++++++++++++++-
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..bb54d67 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
 loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
 the other path depending on HW capability.
 
+In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
+used to connect the RX and TX flows if it can be propagated from RX to TX path.
+
 .. _table_rte_flow_action_set_meta:
 
 .. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6b3d223..a1e20a6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -63,6 +63,7 @@ New Features
 * **Updated the ethdev library to support hairpin between two ports.**
 
   New APIs are introduced to support binding / unbinding 2 ports hairpin.
+  Hairpin TX part flow rules can be inserted explicitly.
 
 * **Updated Broadcom bnxt driver.**
 
@@ -318,6 +319,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * ``struct rte_eth_hairpin_conf`` has two new members:
+
+    * ``uint32_t tx_explicit:1;``
+    * ``uint32_t manual_bind:1;``
+
 
 Known Issues
 ------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index b6371fb..14b9f3a 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1954,13 +1954,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_rx_2_tx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_rx_2_tx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Rx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
@@ -2125,13 +2125,13 @@ struct rte_eth_dev *
 	}
 	if (conf->peer_count > cap.max_tx_2_rx) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+			"Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
 			conf->peer_count, cap.max_tx_2_rx);
 		return -EINVAL;
 	}
 	if (conf->peer_count == 0) {
 		RTE_ETHDEV_LOG(ERR,
-			"Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+			"Invalid value for number of peers for Tx queue(=%u), should be: > 0",
 			conf->peer_count);
 		return -EINVAL;
 	}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 5106098..938df08 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
  * A structure used to configure hairpin binding.
  */
 struct rte_eth_hairpin_conf {
-	uint16_t peer_count; /**< The number of peers. */
+	uint32_t peer_count:16; /**< The number of peers. */
+
+	/**
+	 * Explicit TX flow rule mode. One hairpin pair of queues should have
+	 * the same attribute. The actual support depends on the PMD.
+	 *
+	 * - When set, the user should be responsible for inserting the hairpin
+	 *   TX part flows and removing them.
+	 * - When clear, the PMD will try to handle the TX part of the flows,
+	 *   e.g., by splitting one flow into two parts.
+	 */
+	uint32_t tx_explicit:1;
+
+	/**
+	 * Manually bind hairpin queues. One hairpin pair of queues should have
+	 * the same attribute. The actual support depends on the PMD.
+	 *
+	 * - When set, to enable hairpin, the user should call the hairpin bind
+	 *   API after all the queues are set up properly and the ports are
+	 *   started. Also, the hairpin unbind API should be called accordingly
+	 *   before stopping a port that with hairpin configured.
+	 * - When clear, the PMD will try to enable the hairpin with the queues
+	 *   configured automatically during port start.
+	 */
+	uint32_t manual_bind:1;
+	uint32_t reserved:14; /**< Reserved bits. */
 	struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
 };
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-13 14:58  0%               ` Michel Machado
@ 2020-10-13 15:41  0%                 ` Medvedkin, Vladimir
  2020-10-13 17:46  0%                   ` Michel Machado
  0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-13 15:41 UTC (permalink / raw)
  To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
	Cody Doucette, Andre Nathan, Qiaobin Fu
  Cc: dev, Honnappa Nagarahalli, nd

Hi Michel,

Could you please describe a condition when LPM gets inconsistent? As I 
can see if there is no free tbl8 it will return -ENOSPC.

On 13/10/2020 15:58, Michel Machado wrote:
> Hi Kevin,
> 
>     We do need fields max_rules and number_tbl8s of struct rte_lpm, so 
> the removal would force us to have another patch to our local copy of 
> DPDK. We'd rather avoid this new local patch because we wish to 
> eventually be in sync with the stock DPDK.
> 
>     Those fields are needed in Gatekeeper because we found a condition 
> in an ongoing deployment in which the entries of some LPM tables may 
> suddenly change a lot to reflect policy changes. To avoid getting into a 
> state in which the LPM table is inconsistent because it cannot fit all 
> the new entries, we compute the needed parameters to support the new 
> entries, and compare with the current parameters. If the current table 
> doesn't fit everything, we have to replace it with a new LPM table.
> 
>     If there were a way to obtain the struct rte_lpm_config of a given 
> LPM table, it would cleanly address our need. We have the same need in 
> IPv6 and have a local patch to work around it (see 
> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f). 
> Thus, an IPv4 and IPv6 solution would be best.
> 
>     PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this 
> disscussion.
> 
> [ ]'s
> Michel Machado
> 
> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>> Hi Gatekeeper maintainers (I think),
>>
>> fyi - there is a proposal to remove some members of a struct in DPDK LPM
>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>> as it's an LTS I guess it would probably hit Debian in a few months.
>>
>> The full thread is here:
>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>
>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>> or you can workaround it?
>>
>> thanks,
>> Kevin.
>>
>> [1]
>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248 
>>
>>
>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>
>>>> -----Original Message-----
>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>> <bruce.richardson@intel.com>
>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>
>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>> <Ruifeng.Wang@arm.com>
>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>
>>>>>> Hi Ruifeng,
>>>>>>
>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>> be exposed to the user.
>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>> maintainability.
>>>>>>>>
>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>> ---
>>>>>>>>    lib/librte_lpm/rte_lpm.c | 152
>>>>>>>> +++++++++++++++++++++++---------------
>>>>>> -
>>>>>>>>    lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>>>    2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>
>>>>>>> <snip>
>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>
>>>>>>>>    /** @internal LPM structure. */
>>>>>>>>    struct rte_lpm {
>>>>>>>> -    /* LPM metadata. */
>>>>>>>> -    char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
>>>>>>>> -    uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>> -    uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>> -    struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>> Rule info table. */
>>>>>>>> -
>>>>>>>>        /* LPM Tables. */
>>>>>>>>        struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>                __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>        struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>> -    struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>    };
>>>>>>>>
>>>>>>>
>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>
>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>> which as a static inline function will be in the binary and using
>>>>>>> the old structure offsets.]
>>>>>>>
>>>>>>
>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>> without prior notice.
>>>>>>
>>>>> So if the change wants to happen in 20.11, a deprecation notice should
>>>>> have been added in 20.08.
>>>>> I should have added a deprecation notice. This change will have to 
>>>>> wait for
>>>> next ABI update window.
>>>>>
>>>>
>>>> Do you plan to extend? or is this just speculative?
>>> It is speculative.
>>>
>>>>
>>>> A quick scan and there seems to be several projects using some of these
>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>> gatekeeper. I didn't look at the details to see if they are really 
>>>> needed.
>>>>
>>>> Not sure how much notice they'd need or if they update DPDK much, but I
>>>> think it's worth having a closer look as to how they use lpm and 
>>>> what the
>>>> impact to them is.
>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't access 
>>> the members to be hided.
>>> They will not be impacted by this patch.
>>> But Gatekeeper accesses the rte_lpm internal members that to be 
>>> hided. Its compilation will be broken with this patch.
>>>
>>>>
>>>>> Thanks.
>>>>> Ruifeng
>>>>>>>>    /** LPM RCU QSBR configuration structure. */
>>>>>>>> -- 
>>>>>>>> 2.17.1
>>>>>>>>
>>>>>>
>>>>>> -- 
>>>>>> Regards,
>>>>>> Vladimir
>>>
>>

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5 03/18] eal: rename lcore word choices
  @ 2020-10-13 15:25  1%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-13 15:25 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Anatoly Burakov, Ray Kinsella, Neil Horman,
	Mattias Rönnblom, Harry van Haaren, Bruce Richardson,
	Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
	Pallavi Kadam

Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
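
A minimal sketch of a call site after the rename (the old names remain
available but deprecated in this release):

#include <stdio.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Minimal sketch of a call site using the renamed API. */
static int
worker_fn(void *arg)
{
	(void)arg;
	printf("lcore %u (main lcore is %u)\n",
	       rte_lcore_id(), rte_get_main_lcore());
	return 0;
}

static void
launch_on_all_lcores(void)
{
	/* Previously: rte_eal_mp_remote_launch(worker_fn, NULL, CALL_MASTER); */
	rte_eal_mp_remote_launch(worker_fn, NULL, CALL_MAIN);
	rte_eal_mp_wait_lcore();
}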

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/deprecation.rst       | 19 -------
 doc/guides/rel_notes/release_20_11.rst     | 11 ++++
 lib/librte_eal/common/eal_common_dynmem.c  | 10 ++--
 lib/librte_eal/common/eal_common_launch.c  | 36 ++++++------
 lib/librte_eal/common/eal_common_lcore.c   |  8 +--
 lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
 lib/librte_eal/common/eal_options.h        |  2 +
 lib/librte_eal/common/eal_private.h        |  6 +-
 lib/librte_eal/common/rte_random.c         |  2 +-
 lib/librte_eal/common/rte_service.c        |  2 +-
 lib/librte_eal/freebsd/eal.c               | 28 +++++-----
 lib/librte_eal/freebsd/eal_thread.c        | 32 +++++------
 lib/librte_eal/include/rte_eal.h           |  4 +-
 lib/librte_eal/include/rte_eal_trace.h     |  4 +-
 lib/librte_eal/include/rte_launch.h        | 60 ++++++++++----------
 lib/librte_eal/include/rte_lcore.h         | 35 ++++++++----
 lib/librte_eal/linux/eal.c                 | 28 +++++-----
 lib/librte_eal/linux/eal_memory.c          | 10 ++--
 lib/librte_eal/linux/eal_thread.c          | 32 +++++------
 lib/librte_eal/rte_eal_version.map         |  2 +-
 lib/librte_eal/windows/eal.c               | 16 +++---
 lib/librte_eal/windows/eal_thread.c        | 30 +++++-----
 22 files changed, 230 insertions(+), 211 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087934..7271e9ca4d39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
-* eal: To be more inclusive in choice of naming, the DPDK project
-  will replace uses of master/slave in the API's and command line arguments.
-
-  References to master/slave in relation to lcore will be renamed
-  to initial/worker.  The function ``rte_get_master_lcore()``
-  will be renamed to ``rte_get_initial_lcore()``.
-  For the 20.11 release, both names will be present and the
-  old function will be marked with the deprecated tag.
-  The old function will be removed in a future version.
-
-  The iterator for worker lcores will also change:
-  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
-  ``RTE_LCORE_FOREACH_WORKER``.
-
-  The ``master-lcore`` argument to testpmd will be replaced
-  with ``initial-lcore``. The old ``master-lcore`` argument
-  will produce a runtime notification in 20.11 release, and
-  be removed completely in a future release.
-
 * eal: The terms blacklist and whitelist to describe devices used
   by DPDK will be replaced in the 20.11 relase.
   This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index b7881f2e9d5a..8fa0605ad6cb 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -292,6 +292,17 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* eal: Changed the function ``rte_get_master_lcore()`` is
+  replaced to ``rte_get_main_lcore()``. The old function is deprecated.
+
+  The iterator for worker lcores will also change:
+  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
+  ``RTE_LCORE_FOREACH_WORKER``.
+
+  The ``master-lcore`` argument to testpmd will be replaced
+  with ``main-lcore``. The old ``master-lcore`` argument
+  will produce a runtime notification in 20.11 release, and
+  be removed completely in a future release.
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
 			total_size -= default_size;
 		}
 #else
-		/* in 32-bit mode, allocate all of the memory only on master
+		/* in 32-bit mode, allocate all of the memory only on main
 		 * lcore socket
 		 */
 		total_size = internal_conf->memory;
 		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
 				socket++) {
 			struct rte_config *cfg = rte_eal_get_configuration();
-			unsigned int master_lcore_socket;
+			unsigned int main_lcore_socket;
 
-			master_lcore_socket =
-				rte_lcore_to_socket_id(cfg->master_lcore);
+			main_lcore_socket =
+				rte_lcore_to_socket_id(cfg->main_lcore);
 
-			if (master_lcore_socket != socket)
+			if (main_lcore_socket != socket)
 				continue;
 
 			/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
  * Wait until a lcore finished its job.
  */
 int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
 {
-	if (lcore_config[slave_id].state == WAIT)
+	if (lcore_config[worker_id].state == WAIT)
 		return 0;
 
-	while (lcore_config[slave_id].state != WAIT &&
-	       lcore_config[slave_id].state != FINISHED)
+	while (lcore_config[worker_id].state != WAIT &&
+	       lcore_config[worker_id].state != FINISHED)
 		rte_pause();
 
 	rte_rmb();
 
 	/* we are in finished state, go to wait state */
-	lcore_config[slave_id].state = WAIT;
-	return lcore_config[slave_id].ret;
+	lcore_config[worker_id].state = WAIT;
+	return lcore_config[worker_id].ret;
 }
 
 /*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
  */
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
-			 enum rte_rmt_call_master_t call_master)
+			 enum rte_rmt_call_main_t call_main)
 {
 	int lcore_id;
-	int master = rte_get_master_lcore();
+	int main_lcore = rte_get_main_lcore();
 
 	/* check state of lcores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (lcore_config[lcore_id].state != WAIT)
 			return -EBUSY;
 	}
 
 	/* send messages to cores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_remote_launch(f, arg, lcore_id);
 	}
 
-	if (call_master == CALL_MASTER) {
-		lcore_config[master].ret = f(arg);
-		lcore_config[master].state = FINISHED;
+	if (call_main == CALL_MAIN) {
+		lcore_config[main_lcore].ret = f(arg);
+		lcore_config[main_lcore].state = FINISHED;
 	}
 
 	return 0;
 }
 
 /*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
  */
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
 {
 	unsigned lcore_id;
 
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_wait_lcore(lcore_id);
 	}
 }
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
 {
-	return rte_eal_get_configuration()->master_lcore;
+	return rte_eal_get_configuration()->main_lcore;
 }
 
 unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
 	if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
 
 	while (i < RTE_MAX_LCORE) {
 		if (!rte_lcore_is_enabled(i) ||
-		    (skip_master && (i == rte_get_master_lcore()))) {
+		    (skip_main && (i == rte_get_main_lcore()))) {
 			i++;
 			if (wrap)
 				i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
 	{OPT_TRACE_BUF_SIZE,    1, NULL, OPT_TRACE_BUF_SIZE_NUM   },
 	{OPT_TRACE_MODE,        1, NULL, OPT_TRACE_MODE_NUM       },
 	{OPT_MASTER_LCORE,      1, NULL, OPT_MASTER_LCORE_NUM     },
+	{OPT_MAIN_LCORE,        1, NULL, OPT_MAIN_LCORE_NUM       },
 	{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
 	{OPT_NO_HPET,           0, NULL, OPT_NO_HPET_NUM          },
 	{OPT_NO_HUGE,           0, NULL, OPT_NO_HUGE_NUM          },
@@ -144,7 +145,7 @@ struct device_option {
 static struct device_option_list devopt_list =
 TAILQ_HEAD_INITIALIZER(devopt_list);
 
-static int master_lcore_parsed;
+static int main_lcore_parsed;
 static int mem_parsed;
 static int core_parsed;
 
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
 		for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
 				j++, idx++) {
 			if ((1 << j) & val) {
-				/* handle master lcore already parsed */
+				/* handle main lcore already parsed */
 				uint32_t lcore = idx;
-				if (master_lcore_parsed &&
-						cfg->master_lcore == lcore) {
+				if (main_lcore_parsed &&
+						cfg->main_lcore == lcore) {
 					RTE_LOG(ERR, EAL,
-						"lcore %u is master lcore, cannot use as service core\n",
+						"lcore %u is main lcore, cannot use as service core\n",
 						idx);
 					return -1;
 				}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
 				min = idx;
 			for (idx = min; idx <= max; idx++) {
 				if (cfg->lcore_role[idx] != ROLE_SERVICE) {
-					/* handle master lcore already parsed */
+					/* handle main lcore already parsed */
 					uint32_t lcore = idx;
-					if (cfg->master_lcore == lcore &&
-							master_lcore_parsed) {
+					if (cfg->main_lcore == lcore &&
+							main_lcore_parsed) {
 						RTE_LOG(ERR, EAL,
-							"Error: lcore %u is master lcore, cannot use as service core\n",
+							"Error: lcore %u is main lcore, cannot use as service core\n",
 							idx);
 						return -1;
 					}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
 	return 0;
 }
 
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
 static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
 {
 	char *parsing_end;
 	struct rte_config *cfg = rte_eal_get_configuration();
 
 	errno = 0;
-	cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+	cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
 	if (errno || parsing_end[0] != 0)
 		return -1;
-	if (cfg->master_lcore >= RTE_MAX_LCORE)
+	if (cfg->main_lcore >= RTE_MAX_LCORE)
 		return -1;
-	master_lcore_parsed = 1;
+	main_lcore_parsed = 1;
 
-	/* ensure master core is not used as service core */
-	if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+	/* ensure main core is not used as service core */
+	if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
 		RTE_LOG(ERR, EAL,
-			"Error: Master lcore is used as a service core\n");
+			"Error: Main lcore is used as a service core\n");
 		return -1;
 	}
 
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
 		break;
 
 	case OPT_MASTER_LCORE_NUM:
-		if (eal_parse_master_lcore(optarg) < 0) {
+		fprintf(stderr,
+			"Option --" OPT_MASTER_LCORE
+			" is deprecated use " OPT_MAIN_LCORE "\n");
+		/* fallthrough */
+	case OPT_MAIN_LCORE_NUM:
+		if (eal_parse_main_lcore(optarg) < 0) {
 			RTE_LOG(ERR, EAL, "invalid parameter for --"
-					OPT_MASTER_LCORE "\n");
+					OPT_MAIN_LCORE "\n");
 			return -1;
 		}
 		break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
 
 	RTE_CPU_AND(cpuset, cpuset, &default_set);
 
-	/* if no remaining cpu, use master lcore cpu affinity */
+	/* if no remaining cpu, use main lcore cpu affinity */
 	if (!CPU_COUNT(cpuset)) {
-		memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+		memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
 			sizeof(*cpuset));
 	}
 }
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
 	if (internal_conf->process_type == RTE_PROC_AUTO)
 		internal_conf->process_type = eal_proc_type_detect();
 
-	/* default master lcore is the first one */
-	if (!master_lcore_parsed) {
-		cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
-		if (cfg->master_lcore >= RTE_MAX_LCORE)
+	/* default main lcore is the first one */
+	if (!main_lcore_parsed) {
+		cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+		if (cfg->main_lcore >= RTE_MAX_LCORE)
 			return -1;
-		lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+		lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
 	}
 
 	compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
-	if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
-		RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+	if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+		RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
 		return -1;
 	}
 
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
 	       "                      '( )' can be omitted for single element group,\n"
 	       "                      '@' can be omitted if cpus and lcores have the same value\n"
 	       "  -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
-	       "  --"OPT_MASTER_LCORE" ID   Core ID that is used as master\n"
+	       "  --"OPT_MAIN_LCORE" ID     Core ID that is used as main\n"
 	       "  --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
 	       "  -n CHANNELS         Number of memory channels\n"
 	       "  -m MB               Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
 	OPT_TRACE_BUF_SIZE_NUM,
 #define OPT_TRACE_MODE        "trace-mode"
 	OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE        "main-lcore"
+	OPT_MAIN_LCORE_NUM,
 #define OPT_MASTER_LCORE      "master-lcore"
 	OPT_MASTER_LCORE_NUM,
 #define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
  */
 struct lcore_config {
 	pthread_t thread_id;       /**< pthread identifier */
-	int pipe_master2slave[2];  /**< communication pipe with master */
-	int pipe_slave2master[2];  /**< communication pipe with master */
+	int pipe_main2worker[2];   /**< communication pipe with main */
+	int pipe_worker2main[2];   /**< communication pipe with main */
 
 	lcore_function_t * volatile f; /**< function to call */
 	void * volatile arg;       /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
  * The global RTE configuration structure.
  */
 struct rte_config {
-	uint32_t master_lcore;       /**< Id of the master lcore */
+	uint32_t main_lcore;         /**< Id of the main lcore */
 	uint32_t lcore_count;        /**< Number of available logical cores. */
 	uint32_t numa_node_count;    /**< Number of detected NUMA nodes. */
 	uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	lcore_id = rte_lcore_id();
 
 	if (unlikely(lcore_id == LCORE_ID_ANY))
-		lcore_id = rte_get_master_lcore();
+		lcore_id = rte_get_main_lcore();
 
 	return &rand_states[lcore_id];
 }
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
 	struct rte_config *cfg = rte_eal_get_configuration();
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
 		if (lcore_config[i].core_role == ROLE_SERVICE) {
-			if ((unsigned int)i == cfg->master_lcore)
+			if ((unsigned int)i == cfg->main_lcore)
 				continue;
 			rte_service_lcore_add(i);
 			count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
 
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
-		config->master_lcore, thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+		config->main_lcore, thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-				"lcore-slave-%d", i);
+				"lcore-worker-%d", i);
 		rte_thread_setname(lcore_config[i].thread_id, thread_name);
 
 		ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
 /**
  * Initialize the Environment Abstraction Layer (EAL).
  *
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
  * as possible in the application's main() function.
  *
  * The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
  *
  * When the multi-partition feature is supported, depending on the
  * configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_eal_trace_thread_remote_launch,
 	RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
-		unsigned int slave_id, int rc),
+		unsigned int worker_id, int rc),
 	rte_trace_point_emit_ptr(f);
 	rte_trace_point_emit_ptr(arg);
-	rte_trace_point_emit_u32(slave_id);
+	rte_trace_point_emit_u32(worker_id);
 	rte_trace_point_emit_int(rc);
 )
 RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
 /**
  * Launch a function on another lcore.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
  * is in the WAIT state (this is true after the first call to
  * rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
  *
  * When the remote lcore receives the message, it switches to
  * the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
  * the return value of f is stored in a local variable to be read using
  * rte_eal_wait_lcore().
  *
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
  * nothing about the completion of f.
  *
  * Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore on which the function should be executed.
  * @return
  *   - 0: Success. Execution of function f started on the remote lcore.
  *   - (-EBUSY): The remote lcore is not in a WAIT state.
  */
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
 
 /**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
  * launched on all logical cores.
  */
-enum rte_rmt_call_master_t {
-	SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
-	CALL_MASTER,     /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+	SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+	CALL_MAIN,     /**< lcore handler executed by main core. */
 };
 
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER	RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER	RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
 /**
  * Launch a function on all lcores.
  *
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
  * rte_eal_remote_launch() for each lcore.
  *
  * @param f
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param call_master
- *   If call_master set to SKIP_MASTER, the MASTER lcore does not call
- *   the function. If call_master is set to CALL_MASTER, the function
- *   is also called on master before returning. In any case, the master
+ * @param call_main
+ *   If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ *   the function. If call_main is set to CALL_MAIN, the function
+ *   is also called on main before returning. In any case, the main
  *   lcore returns as soon as it finished its job and knows nothing
  *   about the completion of f on the other lcores.
  * @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
  *     case, no message is sent to any of the lcores.
  */
 int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
-			     enum rte_rmt_call_master_t call_master);
+			     enum rte_rmt_call_main_t call_main);
 
 /**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
  *   The state of the lcore.
  */
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
 
 /**
  * Wait until an lcore finishes its job.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
  * switch to the WAIT state. If the lcore is in RUNNING state, wait until
  * the lcore finishes its job and moves to the FINISHED state.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
- *   - 0: If the lcore identified by the slave_id is in a WAIT state.
+ *   - 0: If the lcore identified by the worker_id is in a WAIT state.
  *   - The value that was returned by the previous remote launch
- *     function call if the lcore identified by the slave_id was in a
+ *     function call if the lcore identified by the worker_id was in a
  *     FINISHED or RUNNING state. In this case, it changes the state
  *     of the lcore to WAIT.
  */
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
 
 /**
  * Wait until all lcores finish their jobs.
  *
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
  * rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  *
  * After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
  */
 void rte_eal_mp_wait_lcore(void);
 
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
 }
 
 /**
- * Get the id of the master lcore
+ * Get the id of the main lcore
  *
  * @return
- *   the id of the master lcore
+ *   the id of the main lcore
  */
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function the id of the main lcore
+ *
+ * @return
+ *   the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+	return rte_get_main_lcore();
+}
 
 /**
  * Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
  *
  * @param i
  *   The current lcore (reference).
- * @param skip_master
- *   If true, do not return the ID of the master lcore.
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
  * @param wrap
  *   If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
  *   return RTE_MAX_LCORE.
  * @return
  *   The next lcore_id or RTE_MAX_LCORE if not found.
  */
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 
 /**
  * Macro to browse all running lcores.
  */
 #define RTE_LCORE_FOREACH(i)						\
 	for (i = rte_get_next_lcore(-1, 0, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 0, 0))
 
 /**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
  */
-#define RTE_LCORE_FOREACH_SLAVE(i)					\
+#define RTE_LCORE_FOREACH_WORKER(i)					\
 	for (i = rte_get_next_lcore(-1, 1, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 1, 0))
 
+#define RTE_LCORE_FOREACH_SLAVE(l)					\
+	RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
 /**
  * Callback prototype for initializing lcores.
  *
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
-		config->master_lcore, (uintptr_t)thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+		config->main_lcore, (uintptr_t)thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-			"lcore-slave-%d", i);
+			"lcore-worker-%d", i);
 		ret = rte_thread_setname(lcore_config[i].thread_id,
 						thread_name);
 		if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
 	/* the allocation logic is a little bit convoluted, but here's how it
 	 * works, in a nutshell:
 	 *  - if user hasn't specified on which sockets to allocate memory via
-	 *    --socket-mem, we allocate all of our memory on master core socket.
+	 *    --socket-mem, we allocate all of our memory on main core socket.
 	 *  - if user has specified sockets to allocate memory on, there may be
 	 *    some "unused" memory left (e.g. if user has specified --socket-mem
 	 *    such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
 	for (i = 0; i < rte_socket_count(); i++) {
 		int hp_sizes = (int) internal_conf->num_hugepage_sizes;
 		uint64_t max_socket_mem, cur_socket_mem;
-		unsigned int master_lcore_socket;
+		unsigned int main_lcore_socket;
 		struct rte_config *cfg = rte_eal_get_configuration();
 		bool skip;
 
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
 		skip = active_sockets != 0 &&
 				internal_conf->socket_mem[socket_id] == 0;
 		/* ...or if we didn't specifically request memory on *any*
-		 * socket, and this is not master lcore
+		 * socket, and this is not main lcore
 		 */
-		master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
-		skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+		main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+		skip |= active_sockets == 0 && socket_id != main_lcore_socket;
 
 		if (skip) {
 			RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a93dea9fe616..33ee2748ede0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -74,7 +74,7 @@ DPDK_21 {
 	rte_free;
 	rte_get_hpet_cycles;
 	rte_get_hpet_hz;
-	rte_get_master_lcore;
+	rte_get_main_lcore;
 	rte_get_next_lcore;
 	rte_get_tsc_hz;
 	rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index bc48f27ab39a..cbca20956210 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -350,8 +350,8 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	bscan = rte_bus_scan();
 	if (bscan < 0) {
@@ -360,16 +360,16 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (_pipe(lcore_config[i].pipe_master2slave,
+		if (_pipe(lcore_config[i].pipe_main2worker,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (_pipe(lcore_config[i].pipe_slave2master,
+		if (_pipe(lcore_config[i].pipe_worker2main,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
 
@@ -394,10 +394,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 	return fctret;
 }
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
 #include "eal_windows.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		return -EBUSY;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = _write(m2s, &c, 1);
+		n = _write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = _read(s2m, &c, 1);
+		n = _read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
 	int n, ret;
 	unsigned int lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
 
 		/* wait command */
 		do {
-			n = _read(m2s, &c, 1);
+			n = _read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = _write(s2m, &c, 1);
+			n = _write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
-- 
2.27.0
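
A minimal usage sketch of the renamed launch API (illustration only, not
part of the patch above; worker_main is a placeholder application
function, and rte_eal_init() error handling is omitted):

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
worker_main(void *arg)
{
	(void)arg;
	/* per-worker processing would go here */
	return 0;
}

/* in main(), after rte_eal_init(): run worker_main on every worker
 * lcore, skipping the main lcore, then wait for all workers to finish.
 */
rte_eal_mp_remote_launch(worker_main, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();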


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth
  2020-10-12 23:08  0%         ` Dharmappa, Savinay
@ 2020-10-13 13:56  0%           ` Dharmappa, Savinay
  0 siblings, 0 replies; 200+ results
From: Dharmappa, Savinay @ 2020-10-13 13:56 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Dumitrescu, Cristian, Singh, Jasvinder, dev



-----Original Message-----
From: Dharmappa, Savinay 
Sent: Tuesday, October 13, 2020 4:39 AM
To: 'Thomas Monjalon' <thomas@monjalon.net>
Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Singh, Jasvinder <jasvinder.singh@intel.com>; 'dev@dpdk.org' <dev@dpdk.org>
Subject: RE: [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth

09/10/2020 14:39, Savinay Dharmappa:
> DPDK sched library allows runtime configuration of the pipe profiles 
> to the pipes of the subport once scheduler hierarchy is constructed.
> However, to change the subport level bandwidth, existing hierarchy 
> needs to be dismantled and whole process of building hierarchy under 
> subport nodes needs to be repeated which might result in router 
> downtime. Furthermore, due to lack of dynamic configuration of the 
> subport bandwidth profile configuration (shaper and Traffic class 
> rates), the user application is unable to dynamically re-distribute 
> the excess-bandwidth of one subport among other subports in the 
> scheduler hierarchy. Therefore, it is also not possible to adjust the 
> subport bandwidth profile in sync with dynamic changes in pipe 
> profiles of subscribers who want to consume higher bandwidth opportunistically.
> 
> This patch series implements dynamic configuration of the subport 
> bandwidth profile to overcome the runtime situation when group of 
> subscribers are not using the allotted bandwidth and dynamic bandwidth 
> re-distribution is needed the without making any structural changes in the hierarchy.
> 
> The implementation work includes refactoring the existing api and data 
> structures defined for port and subport level, new APIs for adding 
> subport level bandwidth profiles that can be used in runtime.
> 
> ---
> v8 -> v9
>    - updated ABI section in release notes.
>    - Addressed review comments from patch 8
>      of v8.

I was asking a question in my reply to v8 but you didn't hit the "reply" button.
>> Sorry for that. All the questions you raised were relevant, so I addressed them and sent out v9.

One more question: why don't you keep the ack given by Cristian in v7?
>> I am carrying the ack given by Cristian in v9, but it is at the bottom of the cover letter.
>> Should I resend the patch, placing the ack just before the version-change info?

Hi Thomas,

Could you please let me know whether I should resend the patch?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  2020-10-09  6:54  0%           ` Ruifeng Wang
@ 2020-10-13 13:53  0%             ` Kevin Traynor
  2020-10-13 14:58  0%               ` Michel Machado
  0 siblings, 1 reply; 200+ results
From: Kevin Traynor @ 2020-10-13 13:53 UTC (permalink / raw)
  To: Ruifeng Wang, Medvedkin, Vladimir, Bruce Richardson,
	Michel Machado, Cody Doucette, Andre Nathan
  Cc: dev, Honnappa Nagarahalli, nd

Hi Gatekeeper maintainers (I think),

fyi - there is a proposal to remove some members of a struct in DPDK LPM
API that Gatekeeper is using [1]. It would only land in DPDK 20.11, but
as that is an LTS release I guess it would probably hit Debian in a few months.

The full thread is here:
http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/

Maybe you can take a look and tell us if they are needed in Gatekeeper,
or if you can work around it?

thanks,
Kevin.

[1]
https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248

On 09/10/2020 07:54, Ruifeng Wang wrote:
> 
>> -----Original Message-----
>> From: Kevin Traynor <ktraynor@redhat.com>
>> Sent: Wednesday, September 30, 2020 4:46 PM
>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>> <bruce.richardson@intel.com>
>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>
>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>
>>>> -----Original Message-----
>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>> <Ruifeng.Wang@arm.com>
>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>
>>>> Hi Ruifeng,
>>>>
>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>> be exposed to the user.
>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>> maintainability.
>>>>>>
>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>> ---
>>>>>>   lib/librte_lpm/rte_lpm.c | 152
>>>>>> +++++++++++++++++++++++---------------
>>>> -
>>>>>>   lib/librte_lpm/rte_lpm.h |   7 --
>>>>>>   2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>
>>>>> <snip>
>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>> index 03da2d37e..112d96f37 100644
>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>
>>>>>>   /** @internal LPM structure. */
>>>>>>   struct rte_lpm {
>>>>>> -	/* LPM metadata. */
>>>>>> -	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
>>>>>> -	uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>> -	uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>> -	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>> Rule info table. */
>>>>>> -
>>>>>>   	/* LPM Tables. */
>>>>>>   	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>   			__rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>   	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>> -	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>   };
>>>>>>
>>>>>
>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>
>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>> which as a static inline function will be in the binary and using
>>>>> the old structure offsets.]
>>>>>
>>>>
>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>> without prior notice.
>>>>
>>> So if the change wants to happen in 20.11, a deprecation notice should
>>> have been added in 20.08.
>>> I should have added a deprecation notice. This change will have to wait for
>> next ABI update window.
>>>
>>
>> Do you plan to extend? or is this just speculative?
> It is speculative.
> 
>>
>> A quick scan and there seems to be several projects using some of these
>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>> gatekeeper. I didn't look at the details to see if they are really needed.
>>
>> Not sure how much notice they'd need or if they update DPDK much, but I
>> think it's worth having a closer look as to how they use lpm and what the
>> impact to them is.
> Checked the projects listed above. BESS, NFF-Go and DPVS don't access the members to be hidden.
> They will not be impacted by this patch.
> But Gatekeeper accesses the rte_lpm internal members that are to be hidden. Its compilation will break with this patch.
> 
>>
>>> Thanks.
>>> Ruifeng
>>>>>>   /** LPM RCU QSBR configuration structure. */
>>>>>> --
>>>>>> 2.17.1
>>>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Vladimir
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6 1/5] ethdev: add extensions attributes to IPv6 item
  2020-10-13 13:32  3%         ` [dpdk-dev] [PATCH v6 0/5] support match on L3 fragmented packets Dekel Peled
@ 2020-10-13 13:32  4%           ` Dekel Peled
  0 siblings, 0 replies; 200+ results
From: Dekel Peled @ 2020-10-13 13:32 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

With the current implementation of DPDK, an application has no simple
way to match on IPv6 packets based on which extension headers they contain.

Field 'Next Header' in IPv6 header indicates type of the first extension
header only. Following extension headers can't be identified by
inspecting the IPv6 header.
As a result, the existence or absence of specific extension headers
can't be used for packet matching.

For example, fragmented IPv6 packets contain a dedicated extension header
(which is implemented in a later patch of this series).
Non-fragmented packets don't contain the fragment extension header.
For an application to match on non-fragmented IPv6 packets, the current
implementation doesn't provide a suitable solution.
Matching on the Next Header field is not sufficient, since additional
extension headers might be present in the same packet.
To match on fragmented IPv6 packets, the same difficulty exists.

This patch implements the update as detailed in RFC [1].
A set of additional values will be added to the IPv6 item struct.
These values will indicate the existence of every defined extension
header type, providing a simple means of identifying which extensions
are present in the packet header.
Continuing the above example, fragmented packets can be identified using
the specific value indicating existence of fragment extension header.
To match on non-fragmented IPv6 packets, need to use has_frag_ext 0.
To match on fragmented IPv6 packets, need to use has_frag_ext 1.
To match on any IPv6 packets, the has_frag_ext field should
not be specified for match.

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
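
For illustration, a minimal flow pattern matching only non-fragmented
IPv6 packets with the new attribute might look like the sketch below
(a sketch only, not part of the patch; it uses the <rte_flow.h>
definitions, attributes and actions are omitted, and the flow would be
created with the usual rte_flow validate/create calls):

/* Match only non-fragmented IPv6: spec has_frag_ext = 0, and only
 * that bit set in the mask, so all other IPv6 fields are ignored
 * for matching.
 */
struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 0 };
struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };

struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{
		.type = RTE_FLOW_ITEM_TYPE_IPV6,
		.spec = &ipv6_spec,
		.mask = &ipv6_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};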

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 20 +++++++++++++++++---
 doc/guides/rel_notes/release_20_11.rst |  5 +++++
 lib/librte_ethdev/rte_flow.h           | 23 +++++++++++++++++++++--
 3 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..e0d7f42 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -946,11 +946,25 @@ Item: ``IPV6``
 
 Matches an IPv6 header.
 
-Note: IPv6 options are handled by dedicated pattern items, see `Item:
-IPV6_EXT`_.
+Dedicated flags indicate if header contains specific extension headers.
+To match on packets containing a specific extension header, an application
+should match on the dedicated flag set to 1.
+To match on packets not containing a specific extension header, an application
+should match on the dedicated flag clear to 0.
+In case application doesn't care about the existence of a specific extension
+header, it should not specify the dedicated flag for matching.
 
 - ``hdr``: IPv6 header definition (``rte_ip.h``).
-- Default ``mask`` matches source and destination addresses only.
+- ``has_hop_ext``: header contains Hop-by-Hop Options extension header.
+- ``has_route_ext``: header contains Routing extension header.
+- ``has_frag_ext``: header contains Fragment extension header.
+- ``has_auth_ext``: header contains Authentication extension header.
+- ``has_esp_ext``: header contains Encapsulation Security Payload extension header.
+- ``has_dest_ext``: header contains Destination Options extension header.
+- ``has_mobil_ext``: header contains Mobility extension header.
+- ``has_hip_ext``: header contains Host Identity Protocol extension header.
+- ``has_shim6_ext``: header contains Shim6 Protocol extension header.
+- Default ``mask`` matches ``hdr`` source and destination addresses only.
 
 Item: ``ICMP``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index bcc0fc2..a01552c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -314,6 +314,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values added to struct, indicating the existence of
+    every defined extension header type.
+    Applications should use the new values for identification of existing
+    extensions in the packet header.
 
 Known Issues
 ------------
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index da8bfa5..33d2e8f 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -792,11 +792,30 @@ struct rte_flow_item_ipv4 {
  *
  * Matches an IPv6 header.
  *
- * Note: IPv6 options are handled by dedicated pattern items, see
- * RTE_FLOW_ITEM_TYPE_IPV6_EXT.
+ * Dedicated flags indicate if header contains specific extension headers.
  */
 struct rte_flow_item_ipv6 {
 	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
+	uint32_t has_hop_ext:1;
+	/**< Header contains Hop-by-Hop Options extension header. */
+	uint32_t has_route_ext:1;
+	/**< Header contains Routing extension header. */
+	uint32_t has_frag_ext:1;
+	/**< Header contains Fragment extension header. */
+	uint32_t has_auth_ext:1;
+	/**< Header contains Authentication extension header. */
+	uint32_t has_esp_ext:1;
+	/**< Header contains Encapsulation Security Payload extension header. */
+	uint32_t has_dest_ext:1;
+	/**< Header contains Destination Options extension header. */
+	uint32_t has_mobil_ext:1;
+	/**< Header contains Mobility extension header. */
+	uint32_t has_hip_ext:1;
+	/**< Header contains Host Identity Protocol extension header. */
+	uint32_t has_shim6_ext:1;
+	/**< Header contains Shim6 Protocol extension header. */
+	uint32_t reserved:23;
+	/**< Reserved for future extension headers, must be zero. */
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. */
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v6 0/5] support match on L3 fragmented packets
  2020-10-12 10:42  3%       ` [dpdk-dev] [PATCH v5 " Dekel Peled
  2020-10-12 10:43  8%         ` [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-13 13:32  3%         ` Dekel Peled
  2020-10-13 13:32  4%           ` [dpdk-dev] [PATCH v6 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
  1 sibling, 1 reply; 200+ results
From: Dekel Peled @ 2020-10-13 13:32 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This series implements support of matching on packets based on the
fragmentation attribute of the packet, i.e. if packet is a fragment
of a larger packet, or the opposite - packet is not a fragment.

In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.

---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
v5: update following rebase on recent ICMP changes.
v6: - move MLX5 PMD patches to separate series.
    - rename IPv6 extension flags for clarity (e.g. frag_ext_exist renamed to has_frag_ext).
---


Dekel Peled (5):
  ethdev: add extensions attributes to IPv6 item
  ethdev: add IPv6 fragment extension header item
  app/testpmd: support IPv4 fragments
  app/testpmd: support IPv6 fragments
  app/testpmd: support IPv6 fragment extension item

 app/test-pmd/cmdline_flow.c            | 53 ++++++++++++++++++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst     | 32 ++++++++++++++++++--
 doc/guides/rel_notes/release_20_11.rst |  5 ++++
 lib/librte_ethdev/rte_flow.c           |  1 +
 lib/librte_ethdev/rte_flow.h           | 43 +++++++++++++++++++++++++--
 lib/librte_ip_frag/rte_ip_frag.h       | 26 ++---------------
 lib/librte_net/rte_ip.h                | 26 +++++++++++++++--
 7 files changed, 155 insertions(+), 31 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC v2 1/1] lib/ring: add scatter gather APIs
  2020-10-12 22:31  4%       ` Honnappa Nagarahalli
@ 2020-10-13 11:38  0%         ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2020-10-13 11:38 UTC (permalink / raw)
  To: Honnappa Nagarahalli, dev; +Cc: olivier.matz, david.marchand, nd, nd


Hi Honnappa,

> Hi Konstantin,
> 	Appreciate your feedback.
> 
> <snip>
> 
> >
> >
> > > Add scatter gather APIs to avoid intermediate memcpy. Use cases that
> > > involve copying large amount of data to/from the ring can benefit from
> > > these APIs.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > ---
> > >  lib/librte_ring/meson.build        |   3 +-
> > >  lib/librte_ring/rte_ring_elem.h    |   1 +
> > >  lib/librte_ring/rte_ring_peek_sg.h | 552
> > > +++++++++++++++++++++++++++++
> > >  3 files changed, 555 insertions(+), 1 deletion(-)  create mode 100644
> > > lib/librte_ring/rte_ring_peek_sg.h
> >
> > As a generic one - need to update ring UT both func and perf to
> > test/measure this new API.
> Yes, will add.
> 
> >
> > >
> > > diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build
> > > index 31c0b4649..377694713 100644
> > > --- a/lib/librte_ring/meson.build
> > > +++ b/lib/librte_ring/meson.build
> > > @@ -12,4 +12,5 @@ headers = files('rte_ring.h',
> > >  		'rte_ring_peek.h',
> > >  		'rte_ring_peek_c11_mem.h',
> > >  		'rte_ring_rts.h',
> > > -		'rte_ring_rts_c11_mem.h')
> > > +		'rte_ring_rts_c11_mem.h',
> > > +		'rte_ring_peek_sg.h')
> > > diff --git a/lib/librte_ring/rte_ring_elem.h
> > > b/lib/librte_ring/rte_ring_elem.h index 938b398fc..7d3933f15 100644
> > > --- a/lib/librte_ring/rte_ring_elem.h
> > > +++ b/lib/librte_ring/rte_ring_elem.h
> > > @@ -1079,6 +1079,7 @@ rte_ring_dequeue_burst_elem(struct rte_ring *r,
> > > void *obj_table,
> > >
> > >  #ifdef ALLOW_EXPERIMENTAL_API
> > >  #include <rte_ring_peek.h>
> > > +#include <rte_ring_peek_sg.h>
> > >  #endif
> > >
> > >  #include <rte_ring.h>
> > > diff --git a/lib/librte_ring/rte_ring_peek_sg.h
> > > b/lib/librte_ring/rte_ring_peek_sg.h
> > > new file mode 100644
> > > index 000000000..97d5764a6
> > > --- /dev/null
> > > +++ b/lib/librte_ring/rte_ring_peek_sg.h
> > > @@ -0,0 +1,552 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + *
> > > + * Copyright (c) 2020 Arm
> > > + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
> > > + * All rights reserved.
> > > + * Derived from FreeBSD's bufring.h
> > > + * Used as BSD-3 Licensed with permission from Kip Macy.
> > > + */
> > > +
> > > +#ifndef _RTE_RING_PEEK_SG_H_
> > > +#define _RTE_RING_PEEK_SG_H_
> > > +
> > > +/**
> > > + * @file
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + * It is not recommended to include this file directly.
> > > + * Please include <rte_ring_elem.h> instead.
> > > + *
> > > + * Ring Peek Scatter Gather APIs
> > > + * Introduction of rte_ring with scatter gather serialized
> > > +producer/consumer
> > > + * (HTS sync mode) makes it possible to split public enqueue/dequeue
> > > +API
> > > + * into 3 phases:
> > > + * - enqueue/dequeue start
> > > + * - copy data to/from the ring
> > > + * - enqueue/dequeue finish
> > > + * Along with the advantages of the peek APIs, these APIs provide the
> > > +ability
> > > + * to avoid copying of the data to temporary area.
> > > + *
> > > + * Note that right now this new API is available only for two sync modes:
> > > + * 1) Single Producer/Single Consumer (RTE_RING_SYNC_ST)
> > > + * 2) Serialized Producer/Serialized Consumer (RTE_RING_SYNC_MT_HTS).
> > > + * It is a user responsibility to create/init ring with appropriate
> > > +sync
> > > + * modes selected.
> > > + *
> > > + * Example usage:
> > > + * // read 1 elem from the ring:
> > > + * n = rte_ring_enqueue_sg_bulk_start(ring, 32, &sgd, NULL);
> > > + * if (n != 0) {
> > > + *	//Copy objects in the ring
> > > + *	memcpy (sgd->ptr1, obj, sgd->n1 * sizeof(uintptr_t));
> > > + *	if (n != sgd->n1)
> > > + *		//Second memcpy because of wrapround
> > > + *		n2 = n - sgd->n1;
> > > + *		memcpy (sgd->ptr2, obj[n2], n2 * sizeof(uintptr_t));
> > > + *	rte_ring_dequeue_sg_finish(ring, n);
> >
> > It is not clear from the example above why do you need SG(ZC) API.
> > Existing peek API would be able to handle such situation (just copy will be
> > done internally). Probably better to use examples you provided in your last
> > reply to Olivier.
> Agree, not a good example, will change it.
> 
> >
> > > + * }
> > > + *
> > > + * Note that between _start_ and _finish_ none other thread can
> > > + proceed
> > > + * with enqueue(/dequeue) operation till _finish_ completes.
> > > + */
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +#include <rte_ring_peek_c11_mem.h>
> > > +
> > > +/* Rock that needs to be passed between reserve and commit APIs */
> > > +struct rte_ring_sg_data {
> > > +	/* Pointer to the first space in the ring */
> > > +	void **ptr1;
> > > +	/* Pointer to the second space in the ring if there is wrap-around */
> > > +	void **ptr2;
> > > +	/* Number of elements in the first pointer. If this is equal to
> > > +	 * the number of elements requested, then ptr2 is NULL.
> > > +	 * Otherwise, subtracting n1 from number of elements requested
> > > +	 * will give the number of elements available at ptr2.
> > > +	 */
> > > +	unsigned int n1;
> > > +};
> >
> > I wonder what is the primary goal of that API?
> > The reason I am asking: from what I understand with this patch ZC API will
> > work only for ST and HTS modes (same as peek API).
> > Though, I think it is possible to make it work for any sync model, by changing
> Agree, the functionality can be extended to other modes as well. I added these 2 modes as I found use cases for them.
> 
> > API a bit: instead of returning sg_data to the user, force him to provide
> > function to read/write elems from/to the ring.
> > Just a schematic one, to illustrate the idea:
> >
> > typedef void (*write_ring_func_t)(void *elem, /*pointer to first elem to
> > update inside the ring*/
> > 				uint32_t num, /* number of elems to update
> > */
> > 				uint32_t esize,
> > 				void *udata  /* caller provide data */);
> >
> > rte_ring_enqueue_zc_bulk_elem(struct rte_ring *r, unsigned int esize,
> > 	unsigned int n, unsigned int *free_space, write_ring_func_t wf, void
> > *udata) {
> > 	struct rte_ring_sg_data sgd;
> > 	.....
> > 	n = move_head_tail(r, ...);
> >
> > 	/* get sgd data based on n */
> > 	get_elem_addr(r, ..., &sgd);
> >
> > 	/* call user defined function to fill reserved elems */
> > 	wf(sgd.p1, sgd.n1, esize, udata);
> > 	if (n != n1)
> > 		wf(sgd.p2, sgd.n2, esize, udata);
> >
> > 	....
> > 	return n;
> > }
> >
> I think the callback function makes it difficult to use the API. The callback function would be a wrapper around another function or API
> which will have its own arguments. Now all those parameters have to be passed using the 'udata'. For example, in the 2nd example that I
> provided earlier, the user has to create a wrapper around the 'rte_eth_rx_burst' API and then provide the parameters to 'rte_eth_rx_burst'
> through 'udata'. 'udata' would need a structure definition as well.

Yes, it would, though I don't see many problems with that.
Let's say for eth_rx_burst(), the user will need something like struct {uint16_t p, q;} udata = {.p = port_id, .q = queue_id,};
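
Just to illustrate (a sketch only - rte_ring_enqueue_zc_bulk_elem() is the
schematic prototype from above, not an existing API, and a real version
would need the callback's return value to handle short Rx bursts):

#include <rte_common.h>
#include <rte_ethdev.h>

struct rx_burst_udata {
	uint16_t port_id;
	uint16_t queue_id;
};

/* write_ring_func_t: fill the reserved ring slots directly with mbuf
 * pointers received from the NIC queue, with no intermediate array.
 */
static void
rx_burst_to_ring(void *elem, uint32_t num, uint32_t esize, void *udata)
{
	struct rx_burst_udata *d = udata;

	RTE_SET_USED(esize);	/* always sizeof(struct rte_mbuf *) here */
	rte_eth_rx_burst(d->port_id, d->queue_id,
			(struct rte_mbuf **)elem, (uint16_t)num);
}

/* usage:
 * struct rx_burst_udata udata = {.port_id = port, .queue_id = queue};
 * n = rte_ring_enqueue_zc_bulk_elem(r, sizeof(struct rte_mbuf *), 32,
 *		NULL, rx_burst_to_ring, &udata);
 */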

> 
> > If we want ZC peek API also - some extra work need to be done with
> > introducing return value for write_ring_func() and checking it properly, but I
> > don't see any big problems here too.
> > That way ZC API can support all sync models, plus we don't need to expose
> > sg_data to the user directly.
> Other modes can be supported with the method used in this patch as well. 

You mean by exposing the tail value to the user (in sg_data or so)?
I am still a bit nervous about doing that.

> If you see a need, I can add them.

Not really, I just thought callbacks would be a good idea here...

> IMO, the only issue with exposing sg_data is ABI compatibility in the future. I think we can align 'struct rte_ring_sg_data' to a cache line
> boundary, and that should provide the ability to extend it in the future without affecting ABI compatibility.

As I understand it, sg_data is an experimental struct (as is the rest of the API in that file).
So breaking it shouldn't be a problem for a while.

To summarize things - as I understand it, you think the callback approach
is not a good choice.
On the other hand, I am not really happy with the idea of exposing tail value
updates to the user.
So I suggest we just go ahead with the patch as it is:
the sg_data approach, _ZC_ peek API only.
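
For reference, a minimal caller sketch of that usage, based on the API in
this patch (reserve space, copy the objects into the ring, then commit,
with the wrap-around case handled explicitly):

	struct rte_ring_sg_data sgd;
	void *obj[32];		/* object pointers prepared by the caller */
	unsigned int n;

	n = rte_ring_enqueue_sg_bulk_start(r, 32, &sgd, NULL);
	if (n != 0) {
		/* first contiguous region in the ring */
		memcpy(sgd.ptr1, obj, sgd.n1 * sizeof(uintptr_t));
		if (n != sgd.n1)
			/* second region after the wrap-around */
			memcpy(sgd.ptr2, &obj[sgd.n1],
				(n - sgd.n1) * sizeof(uintptr_t));
		rte_ring_enqueue_sg_finish(r, n);
	}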

> 
> > Also, in future, we probably can de-duplicate the code by making our non-ZC
> > API to use that one internally (pass ring_enqueue_elems()/ob_table as a
> > parameters).
> >
> > > +

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] security: update session create API
  2020-10-10 22:11  2% ` [dpdk-dev] [PATCH v2] " Akhil Goyal
@ 2020-10-13  2:12  0%   ` Lukasz Wojciechowski
  2020-10-14 18:56  2%   ` [dpdk-dev] [PATCH v3] " Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Lukasz Wojciechowski @ 2020-10-13  2:12 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
	declan.doherty, radu.nicolau, david.coyle,
	"'Lukasz Wojciechowski'",

Hi Akhil,

comments inline

W dniu 11.10.2020 o 00:11, Akhil Goyal pisze:
> The API ``rte_security_session_create`` takes only a single
> mempool for the session and session private data. So the
> application needs to create a mempool for twice the number of
> sessions needed, which also leads to wastage of memory as
> session private data needs more memory compared to the session.
> Hence the API is modified to take two mempool pointers
> - one for the session and one for the private data.
> This is very similar to crypto based session create APIs.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
>
> Changes in V2:
> incorporated comments from Lukasz and David.
>
>   app/test-crypto-perf/cperf_ops.c       |  4 +-
>   app/test-crypto-perf/main.c            | 12 +++--
>   app/test/test_cryptodev.c              | 18 ++++++--
>   app/test/test_ipsec.c                  |  3 +-
>   app/test/test_security.c               | 61 ++++++++++++++++++++------
>   doc/guides/prog_guide/rte_security.rst |  8 +++-
>   doc/guides/rel_notes/deprecation.rst   |  7 ---
>   doc/guides/rel_notes/release_20_11.rst |  6 +++
>   examples/ipsec-secgw/ipsec-secgw.c     | 12 +----
>   examples/ipsec-secgw/ipsec.c           |  9 ++--
>   lib/librte_security/rte_security.c     |  7 ++-
>   lib/librte_security/rte_security.h     |  4 +-
>   12 files changed, 102 insertions(+), 49 deletions(-)
>
> diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
> index 3da835a9c..3a64a2c34 100644
> --- a/app/test-crypto-perf/cperf_ops.c
> +++ b/app/test-crypto-perf/cperf_ops.c
> @@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>   
>   		/* Create security session */
>   		return (void *)rte_security_session_create(ctx,
> -					&sess_conf, sess_mp);
> +					&sess_conf, sess_mp, priv_mp);
>   	}
>   	if (options->op_type == CPERF_DOCSIS) {
>   		enum rte_security_docsis_direction direction;
> @@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>   
>   		/* Create security session */
>   		return (void *)rte_security_session_create(ctx,
> -					&sess_conf, priv_mp);
> +					&sess_conf, sess_mp, priv_mp);
>   	}
>   #endif
>   	sess = rte_cryptodev_sym_session_create(sess_mp);
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 62ae6048b..53864ffdd 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
>   		if (sess_size > max_sess_size)
>   			max_sess_size = sess_size;
>   	}
> -
> +#ifdef RTE_LIBRTE_SECURITY
> +	for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
> +		sess_size = rte_security_session_get_size(
> +				rte_cryptodev_get_sec_ctx(cdev_id));
> +		if (sess_size > max_sess_size)
> +			max_sess_size = sess_size;
> +	}
> +#endif
>   	/*
>   	 * Calculate number of needed queue pairs, based on the amount
>   	 * of available number of logical cores and crypto devices.
> @@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
>   				opts->nb_qps * nb_slaves;
>   #endif
>   		} else
> -			sessions_needed = enabled_cdev_count *
> -						opts->nb_qps * 2;
> +			sessions_needed = enabled_cdev_count * opts->nb_qps;
>   
>   		/*
>   		 * A single session is required per queue pair
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index ac2a36bc2..4bd9d8aff 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -553,9 +553,15 @@ testsuite_setup(void)
>   	unsigned int session_size =
>   		rte_cryptodev_sym_get_private_session_size(dev_id);
>   
> +#ifdef RTE_LIBRTE_SECURITY
> +	unsigned int security_session_size = rte_security_session_get_size(
> +			rte_cryptodev_get_sec_ctx(dev_id));
> +
> +	if (session_size < security_session_size)
> +			session_size = security_session_size;
> +#endif
>   	/*
> -	 * Create mempool with maximum number of sessions * 2,
> -	 * to include the session headers
> +	 * Create mempool with maximum number of sessions.
>   	 */
>   	if (info.sym.max_nb_sessions != 0 &&
>   			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
> @@ -7219,7 +7225,8 @@ test_pdcp_proto(int i, int oop,
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx,
> -				&sess_conf, ts_params->session_priv_mpool);
> +				&sess_conf, ts_params->session_mpool,
> +				ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
>   		printf("TestCase %s()-%d line %d failed %s: ",
> @@ -7479,7 +7486,8 @@ test_pdcp_proto_SGL(int i, int oop,
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx,
> -				&sess_conf, ts_params->session_priv_mpool);
> +				&sess_conf, ts_params->session_mpool,
> +				ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
>   		printf("TestCase %s()-%d line %d failed %s: ",
> @@ -7836,6 +7844,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> +					ts_params->session_mpool,
>   					ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
> @@ -8011,6 +8020,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
>   
>   	/* Create security session */
>   	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> +					ts_params->session_mpool,
>   					ts_params->session_priv_mpool);
>   
>   	if (!ut_params->sec_session) {
> diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> index 79d00d7e0..9ad07a179 100644
> --- a/app/test/test_ipsec.c
> +++ b/app/test/test_ipsec.c
> @@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
>   	static struct rte_security_session_conf conf;
>   
>   	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
> -					&conf, qp->mp_session_private);
> +					&conf, qp->mp_session,
> +					qp->mp_session_private);
>   
>   	if (ut->ss[j].security.ses == NULL)
>   		return -ENOMEM;
> diff --git a/app/test/test_security.c b/app/test/test_security.c
> index 77fd5adc6..bf6a3e9de 100644
> --- a/app/test/test_security.c
> +++ b/app/test/test_security.c
> @@ -237,24 +237,25 @@ static struct mock_session_create_data {
>   	struct rte_security_session_conf *conf;
>   	struct rte_security_session *sess;
>   	struct rte_mempool *mp;
> +	struct rte_mempool *priv_mp;
>   
>   	int ret;
>   
>   	int called;
>   	int failed;
> -} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
> +} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
>   
>   static int
>   mock_session_create(void *device,
>   		struct rte_security_session_conf *conf,
>   		struct rte_security_session *sess,
> -		struct rte_mempool *mp)
> +		struct rte_mempool *priv_mp)
>   {
>   	mock_session_create_exp.called++;
>   
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
>   	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
> -	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
> +	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
>   
>   	mock_session_create_exp.sess = sess;
>   
> @@ -502,6 +503,7 @@ struct rte_security_ops mock_ops = {
>    */
>   static struct security_testsuite_params {
>   	struct rte_mempool *session_mpool;
> +	struct rte_mempool *session_priv_mpool;
>   } testsuite_params = { NULL };
>   
>   /**
> @@ -524,7 +526,8 @@ static struct security_unittest_params {
>   	.sess = NULL,
>   };
>   
> -#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
> +#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
> +#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
>   #define SECURITY_TEST_MEMPOOL_SIZE 15
>   #define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
>   
> @@ -545,6 +548,22 @@ testsuite_setup(void)
>   			SOCKET_ID_ANY, 0);
>   	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
>   			"Cannot create mempool %s\n", rte_strerror(rte_errno));
> +
> +	ts_params->session_priv_mpool = rte_mempool_create(
> +			SECURITY_TEST_PRIV_MEMPOOL_NAME,
> +			SECURITY_TEST_MEMPOOL_SIZE,
> +			rte_security_session_get_size(&unittest_params.ctx),
A call to rte_security_session_get_size() will cause the mockup function 
mock_session_get_size() to be called, which will return 0.
Why do you call this function instead of defining some value for the private 
mempool element size?
> +			0, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +	if (ts_params->session_priv_mpool == NULL) {
> +		printf("TestCase %s() line %d failed (null): "
> +				"Cannot create priv mempool %s\n",
> +				__func__, __LINE__, rte_strerror(rte_errno));
Instead of printf() use RTE_LOG(ERR, EAL,...). All other messages are 
printed this way. It allows control of error messages if required.
> +		rte_mempool_free(ts_params->session_mpool);
> +		ts_params->session_mpool = NULL;
> +		return TEST_FAILED;
> +	}
> +
>   	return TEST_SUCCESS;
>   }
>   
> @@ -559,6 +578,10 @@ testsuite_teardown(void)
>   		rte_mempool_free(ts_params->session_mpool);
>   		ts_params->session_mpool = NULL;
>   	}
> +	if (ts_params->session_priv_mpool) {
> +		rte_mempool_free(ts_params->session_priv_mpool);
> +		ts_params->session_priv_mpool = NULL;
> +	}
>   }
>   
>   /**
> @@ -659,7 +682,8 @@ ut_setup_with_session(void)
>   	mock_session_create_exp.ret = 0;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
>   			sess);
>   	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> @@ -701,7 +725,8 @@ test_session_create_inv_context(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(NULL, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -725,7 +750,8 @@ test_session_create_inv_context_ops(void)
>   	ut_params->ctx.ops = NULL;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -749,7 +775,8 @@ test_session_create_inv_context_ops_fun(void)
>   	ut_params->ctx.ops = &empty_ops;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -770,7 +797,8 @@ test_session_create_inv_configuration(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, NULL,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -781,7 +809,7 @@ test_session_create_inv_configuration(void)
>   }
>   
>   /**
> - * Test execution of rte_security_session_create with NULL mp parameter
> + * Test execution of rte_security_session_create with NULL mempools
>    */
>   static int
>   test_session_create_inv_mempool(void)
> @@ -790,7 +818,7 @@ test_session_create_inv_mempool(void)
>   	struct rte_security_session *sess;
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			NULL);
> +			NULL, NULL);
It would be best to add a new testcase verifying the passing of a NULL 
private mempool. If you pass NULL as the primary mempool, as in this 
testcase, the verification of the priv mempool (rte_security.c:37) will 
never happen, because rte_security_session_create() will return in line 36.
A rough sketch of such a testcase is included at the end of this mail.
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -824,7 +852,8 @@ test_session_create_mempool_empty(void)
>   	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> @@ -853,10 +882,12 @@ test_session_create_ops_failure(void)
>   	mock_session_create_exp.device = NULL;
>   	mock_session_create_exp.conf = &ut_params->conf;
>   	mock_session_create_exp.mp = ts_params->session_mpool;
> +	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
>   	mock_session_create_exp.ret = -1;	/* Return failure status. */
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
>   			sess, NULL, "%p");
>   	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
> @@ -879,10 +910,12 @@ test_session_create_success(void)
>   	mock_session_create_exp.device = NULL;
>   	mock_session_create_exp.conf = &ut_params->conf;
>   	mock_session_create_exp.mp = ts_params->session_mpool;
> +	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
>   	mock_session_create_exp.ret = 0;	/* Return success status. */
>   
>   	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> -			ts_params->session_mpool);
> +			ts_params->session_mpool,
> +			ts_params->session_priv_mpool);
>   	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
>   			sess);
>   	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
> index 127da2e4f..fdb469d5f 100644
> --- a/doc/guides/prog_guide/rte_security.rst
> +++ b/doc/guides/prog_guide/rte_security.rst
> @@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
>   
>   The Security framework provides APIs to create and free sessions for crypto/ethernet
>   devices, where sessions are mempool objects. It is the application's responsibility
> -to create and manage the session mempools. The mempool object size should be able to
> -accommodate the driver's private data of security session.
> +to create and manage two session mempools - one for session and other for session
> +private data. The private session data mempool object size should be able to
> +accommodate the driver's private data of security session. The application can get
> +the size of session private data using API ``rte_security_session_get_size``.
> +And the session mempool object size should be enough to accomodate
> +``rte_security_session``.
>   
>   Once the session mempools have been created, ``rte_security_session_create()``
>   is used to allocate and initialize a session for the required crypto/ethernet device.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 52f413e21..d956a76e7 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -164,13 +164,6 @@ Deprecation Notices
>     following the IPv6 header, as proposed in RFC
>   https://mails.dpdk.org/archives/dev/2020-August/177257.html.
>   
> -* security: The API ``rte_security_session_create`` takes only single mempool
> -  for session and session private data. So the application need to create
> -  mempool for twice the number of sessions needed and will also lead to
> -  wastage of memory as session private data need more memory compared to session.
> -  Hence the API will be modified to take two mempool pointers - one for session
> -  and one for private data.
> -
>   * cryptodev: ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algorithm``,
>     ``RTE_CRYPTO_CIPHER_LIST_END`` from ``enum rte_crypto_cipher_algorithm`` and
>     ``RTE_CRYPTO_AUTH_LIST_END`` from ``enum rte_crypto_auth_algorithm``
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index c34ab5493..68b82ae4e 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -307,6 +307,12 @@ API Changes
>     ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
>     ``rte_fpga_lte_fec_conf``.
>   
> +* security: The API ``rte_security_session_create`` is updated to take two
> +  mempool objects one for session and other for session private data.
> +  So the application need to create two mempools and get the size of session
> +  private data using API ``rte_security_session_get_size`` for private session
> +  mempool.
> +
>   
>   ABI Changes
>   -----------
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 60132c4bd..2326089bb 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
>   
>   	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
>   			"sess_mp_%u", socket_id);
> -	/*
> -	 * Doubled due to rte_security_session_create() uses one mempool for
> -	 * session and for session private data.
> -	 */
>   	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> -		rte_lcore_count()) * 2;
> +		rte_lcore_count());
>   	sess_mp = rte_cryptodev_sym_session_pool_create(
>   			mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
>   			socket_id);
> @@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
>   
>   	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
>   			"sess_mp_priv_%u", socket_id);
> -	/*
> -	 * Doubled due to rte_security_session_create() uses one mempool for
> -	 * session and for session private data.
> -	 */
>   	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> -		rte_lcore_count()) * 2;
> +		rte_lcore_count());
>   	sess_mp = rte_mempool_create(mp_name,
>   			nb_sess,
>   			sess_sz,
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index 01faa7ac7..6baeeb342 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
>   			set_ipsec_conf(sa, &(sess_conf.ipsec));
>   
>   			ips->security.ses = rte_security_session_create(ctx,
> -					&sess_conf, ipsec_ctx->session_priv_pool);
> +					&sess_conf, ipsec_ctx->session_pool,
> +					ipsec_ctx->session_priv_pool);
>   			if (ips->security.ses == NULL) {
>   				RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> @@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>   		}
>   
>   		ips->security.ses = rte_security_session_create(sec_ctx,
> -				&sess_conf, skt_ctx->session_pool);
> +				&sess_conf, skt_ctx->session_pool,
> +				skt_ctx->session_priv_pool);
>   		if (ips->security.ses == NULL) {
>   			RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> @@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>   		sess_conf.userdata = (void *) sa;
>   
>   		ips->security.ses = rte_security_session_create(sec_ctx,
> -					&sess_conf, skt_ctx->session_pool);
> +					&sess_conf, skt_ctx->session_pool,
> +					skt_ctx->session_priv_pool);
>   		if (ips->security.ses == NULL) {
>   			RTE_LOG(ERR, IPSEC,
>   				"SEC Session init failed: err: %d\n", ret);
> diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
> index 515c29e04..ee4666026 100644
> --- a/lib/librte_security/rte_security.c
> +++ b/lib/librte_security/rte_security.c
> @@ -26,18 +26,21 @@
>   struct rte_security_session *
>   rte_security_session_create(struct rte_security_ctx *instance,
>   			    struct rte_security_session_conf *conf,
> -			    struct rte_mempool *mp)
> +			    struct rte_mempool *mp,
> +			    struct rte_mempool *priv_mp)
>   {
>   	struct rte_security_session *sess = NULL;
>   
>   	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
>   	RTE_PTR_OR_ERR_RET(conf, NULL);
>   	RTE_PTR_OR_ERR_RET(mp, NULL);
> +	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
>   
>   	if (rte_mempool_get(mp, (void **)&sess))
>   		return NULL;
>   
> -	if (instance->ops->session_create(instance->device, conf, sess, mp)) {
> +	if (instance->ops->session_create(instance->device, conf,
> +				sess, priv_mp)) {
>   		rte_mempool_put(mp, (void *)sess);
>   		return NULL;
>   	}
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 16839e539..1710cdd6a 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -386,6 +386,7 @@ struct rte_security_session {
>    * @param   instance	security instance
>    * @param   conf	session configuration parameters
>    * @param   mp		mempool to allocate session objects from
> + * @param   priv_mp	mempool to allocate session private data objects from
>    * @return
>    *  - On success, pointer to session
>    *  - On failure, NULL
> @@ -393,7 +394,8 @@ struct rte_security_session {
>   struct rte_security_session *
>   rte_security_session_create(struct rte_security_ctx *instance,
>   			    struct rte_security_session_conf *conf,
> -			    struct rte_mempool *mp);
> +			    struct rte_mempool *mp,
> +			    struct rte_mempool *priv_mp);
>   
>   /**
>    * Update security session as specified by the session configuration
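
A rough sketch of the testcase I have in mind (it follows the existing
mock-based tests in this file; treat it as an illustration only):

static int
test_session_create_inv_mempool_priv(void)
{
	struct security_unittest_params *ut_params = &unittest_params;
	struct security_testsuite_params *ts_params = &testsuite_params;
	struct rte_security_session *sess;

	/* Valid session mempool, NULL private data mempool, so that the
	 * priv_mp check in rte_security_session_create() is actually
	 * reached.
	 */
	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
			ts_params->session_mpool, NULL);
	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
			sess, NULL, "%p");
	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);

	return TEST_SUCCESS;
}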

Best regards

Lukasz

-- 
Lukasz Wojciechowski
Principal Software Engineer

Samsung R&D Institute Poland
Samsung Electronics
Office +48 22 377 88 25
l.wojciechow@partner.samsung.com


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth
  2020-10-12  5:24  0%       ` Dharmappa, Savinay
@ 2020-10-12 23:08  0%         ` Dharmappa, Savinay
  2020-10-13 13:56  0%           ` Dharmappa, Savinay
  0 siblings, 1 reply; 200+ results
From: Dharmappa, Savinay @ 2020-10-12 23:08 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Dumitrescu, Cristian, Singh, Jasvinder, dev

09/10/2020 14:39, Savinay Dharmappa:
> DPDK sched library allows runtime configuration of the pipe profiles 
> to the pipes of the subport once scheduler hierarchy is constructed.
> However, to change the subport level bandwidth, the existing hierarchy 
> needs to be dismantled and the whole process of building the hierarchy under 
> subport nodes needs to be repeated, which might result in router 
> downtime. Furthermore, due to the lack of dynamic configuration of the 
> subport bandwidth profile (shaper and traffic class 
> rates), the user application is unable to dynamically re-distribute 
> the excess-bandwidth of one subport among other subports in the 
> scheduler hierarchy. Therefore, it is also not possible to adjust the 
> subport bandwidth profile in sync with dynamic changes in pipe 
> profiles of subscribers who want to consume higher bandwidth opportunistically.
> 
> This patch series implements dynamic configuration of the subport 
> bandwidth profile to overcome the runtime situation when a group of 
> subscribers is not using the allotted bandwidth and dynamic bandwidth 
> re-distribution is needed without making any structural changes in the hierarchy.
> 
> The implementation work includes refactoring the existing API and data 
> structures defined at the port and subport level, and new APIs for adding 
> subport level bandwidth profiles that can be used at runtime.
> 
> ---
> v8 -> v9
>    - updated ABI section in release notes.
>    - Addressed review comments from patch 8
>      of v8.

I was asking a question in my reply to v8 but you didn't hit the "reply" button.
>> Sorry for that. All the questions you raised were relevant, so I addressed them and sent out v9.

One more question: why don't you keep the ack given by Cristian in v7?
>> I am carrying the ack given by Cristian in v9, but it is at the bottom of the cover letter.
>> Should I resend the patch placing the ack just before the version changes info?




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC v2 1/1] lib/ring: add scatter gather APIs
  @ 2020-10-12 22:31  4%       ` Honnappa Nagarahalli
  2020-10-13 11:38  0%         ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2020-10-12 22:31 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: olivier.matz, david.marchand, nd, Honnappa Nagarahalli, nd

Hi Konstantin,
	Appreciate your feedback.

<snip>

> 
> 
> > Add scatter gather APIs to avoid intermediate memcpy. Use cases that
> > involve copying large amount of data to/from the ring can benefit from
> > these APIs.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > ---
> >  lib/librte_ring/meson.build        |   3 +-
> >  lib/librte_ring/rte_ring_elem.h    |   1 +
> >  lib/librte_ring/rte_ring_peek_sg.h | 552
> > +++++++++++++++++++++++++++++
> >  3 files changed, 555 insertions(+), 1 deletion(-)  create mode 100644
> > lib/librte_ring/rte_ring_peek_sg.h
> 
> As a generic one - need to update ring UT both func and perf to
> test/measure this new API.
Yes, will add.

> 
> >
> > diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build
> > index 31c0b4649..377694713 100644
> > --- a/lib/librte_ring/meson.build
> > +++ b/lib/librte_ring/meson.build
> > @@ -12,4 +12,5 @@ headers = files('rte_ring.h',
> >  		'rte_ring_peek.h',
> >  		'rte_ring_peek_c11_mem.h',
> >  		'rte_ring_rts.h',
> > -		'rte_ring_rts_c11_mem.h')
> > +		'rte_ring_rts_c11_mem.h',
> > +		'rte_ring_peek_sg.h')
> > diff --git a/lib/librte_ring/rte_ring_elem.h
> > b/lib/librte_ring/rte_ring_elem.h index 938b398fc..7d3933f15 100644
> > --- a/lib/librte_ring/rte_ring_elem.h
> > +++ b/lib/librte_ring/rte_ring_elem.h
> > @@ -1079,6 +1079,7 @@ rte_ring_dequeue_burst_elem(struct rte_ring *r,
> > void *obj_table,
> >
> >  #ifdef ALLOW_EXPERIMENTAL_API
> >  #include <rte_ring_peek.h>
> > +#include <rte_ring_peek_sg.h>
> >  #endif
> >
> >  #include <rte_ring.h>
> > diff --git a/lib/librte_ring/rte_ring_peek_sg.h
> > b/lib/librte_ring/rte_ring_peek_sg.h
> > new file mode 100644
> > index 000000000..97d5764a6
> > --- /dev/null
> > +++ b/lib/librte_ring/rte_ring_peek_sg.h
> > @@ -0,0 +1,552 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + *
> > + * Copyright (c) 2020 Arm
> > + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
> > + * All rights reserved.
> > + * Derived from FreeBSD's bufring.h
> > + * Used as BSD-3 Licensed with permission from Kip Macy.
> > + */
> > +
> > +#ifndef _RTE_RING_PEEK_SG_H_
> > +#define _RTE_RING_PEEK_SG_H_
> > +
> > +/**
> > + * @file
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + * It is not recommended to include this file directly.
> > + * Please include <rte_ring_elem.h> instead.
> > + *
> > + * Ring Peek Scatter Gather APIs
> > + * Introduction of rte_ring with scatter gather serialized
> > +producer/consumer
> > + * (HTS sync mode) makes it possible to split public enqueue/dequeue
> > +API
> > + * into 3 phases:
> > + * - enqueue/dequeue start
> > + * - copy data to/from the ring
> > + * - enqueue/dequeue finish
> > + * Along with the advantages of the peek APIs, these APIs provide the
> > +ability
> > + * to avoid copying of the data to temporary area.
> > + *
> > + * Note that right now this new API is available only for two sync modes:
> > + * 1) Single Producer/Single Consumer (RTE_RING_SYNC_ST)
> > + * 2) Serialized Producer/Serialized Consumer (RTE_RING_SYNC_MT_HTS).
> > + * It is a user responsibility to create/init ring with appropriate
> > +sync
> > + * modes selected.
> > + *
> > + * Example usage:
> > + * // read 1 elem from the ring:
> > + * n = rte_ring_enqueue_sg_bulk_start(ring, 32, &sgd, NULL);
> > + * if (n != 0) {
> > + *	//Copy objects in the ring
> > + *	memcpy (sgd->ptr1, obj, sgd->n1 * sizeof(uintptr_t));
> > + *	if (n != sgd->n1)
> > + *		//Second memcpy because of wrapround
> > + *		n2 = n - sgd->n1;
> > + *		memcpy (sgd->ptr2, obj[n2], n2 * sizeof(uintptr_t));
> > + *	rte_ring_dequeue_sg_finish(ring, n);
> 
> It is not clear from the example above why do you need SG(ZC) API.
> Existing peek API would be able to handle such situation (just copy will be
> done internally). Probably better to use examples you provided in your last
> reply to Olivier.
Agree, not a good example, will change it.

> 
> > + * }
> > + *
> > + * Note that between _start_ and _finish_ none other thread can
> > + proceed
> > + * with enqueue(/dequeue) operation till _finish_ completes.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <rte_ring_peek_c11_mem.h>
> > +
> > +/* Rock that needs to be passed between reserve and commit APIs */
> > +struct rte_ring_sg_data {
> > +	/* Pointer to the first space in the ring */
> > +	void **ptr1;
> > +	/* Pointer to the second space in the ring if there is wrap-around */
> > +	void **ptr2;
> > +	/* Number of elements in the first pointer. If this is equal to
> > +	 * the number of elements requested, then ptr2 is NULL.
> > +	 * Otherwise, subtracting n1 from number of elements requested
> > +	 * will give the number of elements available at ptr2.
> > +	 */
> > +	unsigned int n1;
> > +};
> 
> I wonder what is the primary goal of that API?
> The reason I am asking: from what I understand with this patch ZC API will
> work only for ST and HTS modes (same as peek API).
> Though, I think it is possible to make it work for any sync model, by changing
Agree, the functionality can be extended to other modes as well. I added these 2 modes as I found use cases for them.

> API a bit: instead of returning sg_data to the user, force him to provide
> function to read/write elems from/to the ring.
> Just a schematic one, to illustrate the idea:
> 
> typedef void (*write_ring_func_t)(void *elem, /*pointer to first elem to
> update inside the ring*/
> 				uint32_t num, /* number of elems to update
> */
> 				uint32_t esize,
> 				void *udata  /* caller provide data */);
> 
> rte_ring_enqueue_zc_bulk_elem(struct rte_ring *r, unsigned int esize,
> 	unsigned int n, unsigned int *free_space, write_ring_func_t wf, void
> *udata) {
> 	struct rte_ring_sg_data sgd;
> 	.....
> 	n = move_head_tail(r, ...);
> 
> 	/* get sgd data based on n */
> 	get_elem_addr(r, ..., &sgd);
> 
> 	/* call user defined function to fill reserved elems */
> 	wf(sgd.p1, sgd.n1, esize, udata);
> 	if (n != n1)
> 		wf(sgd.p2, sgd.n2, esize, udata);
> 
> 	....
> 	return n;
> }
> 
I think the callback function makes it difficult to use the API. The callback function would be a wrapper around another function or API which will have its own arguments. Now all those parameters have to be passed using the 'udata'. For example, in the 2nd example that I provided earlier, the user has to create a wrapper around the 'rte_eth_rx_burst' API and then provide the parameters to 'rte_eth_rx_burst' through 'udata'. 'udata' would need a structure definition as well.

> If we want ZC peek API also - some extra work need to be done with
> introducing return value for write_ring_func() and checking it properly, but I
> don't see any big problems here too.
> That way ZC API can support all sync models, plus we don't need to expose
> sg_data to the user directly.
Other modes can be supported with the method used in this patch as well. If you see a need, I can add them.
IMO, the only issue with exposing sg_data is ABI compatibility in the future. I think we can align 'struct rte_ring_sg_data' to a cache line boundary, and that should provide the ability to extend it in the future without affecting ABI compatibility.
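
For illustration only, a sketch of that cache-line alignment idea (not part
of this patch):

/* Align the public struct so that new fields can be appended later
 * without changing its size or alignment.
 */
struct rte_ring_sg_data {
	/* Pointer to the first space in the ring */
	void **ptr1;
	/* Pointer to the second space in the ring if there is wrap-around */
	void **ptr2;
	/* Number of elements in the first pointer */
	unsigned int n1;
} __rte_cache_aligned;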

> Also, in future, we probably can de-duplicate the code by making our non-ZC
> API to use that one internally (pass ring_enqueue_elems()/ob_table as a
> parameters).
> 
> > +
> > +static __rte_always_inline void
> > +__rte_ring_get_elem_addr_64(struct rte_ring *r, uint32_t head,
> > +	uint32_t num, void **dst1, uint32_t *n1, void **dst2) {
> > +	uint32_t idx = head & r->mask;
> > +	uint64_t *ring = (uint64_t *)&r[1];
> > +
> > +	*dst1 = ring + idx;
> > +	*n1 = num;
> > +
> > +	if (idx + num > r->size) {
> > +		*n1 = num - (r->size - idx - 1);
> > +		*dst2 = ring;
> > +	}
> > +}
> > +
> > +static __rte_always_inline void
> > +__rte_ring_get_elem_addr_128(struct rte_ring *r, uint32_t head,
> > +	uint32_t num, void **dst1, uint32_t *n1, void **dst2) {
> > +	uint32_t idx = head & r->mask;
> > +	rte_int128_t *ring = (rte_int128_t *)&r[1];
> > +
> > +	*dst1 = ring + idx;
> > +	*n1 = num;
> > +
> > +	if (idx + num > r->size) {
> > +		*n1 = num - (r->size - idx - 1);
> > +		*dst2 = ring;
> > +	}
> > +}
> > +
> > +static __rte_always_inline void
> > +__rte_ring_get_elem_addr(struct rte_ring *r, uint32_t head,
> > +	uint32_t esize, uint32_t num, void **dst1, uint32_t *n1, void
> > +**dst2) {
> > +	if (esize == 8)
> > +		__rte_ring_get_elem_addr_64(r, head,
> > +						num, dst1, n1, dst2);
> > +	else if (esize == 16)
> > +		__rte_ring_get_elem_addr_128(r, head,
> > +						num, dst1, n1, dst2);
> 
> 
> I don't think we need that special handling for 8/16B sizes.
> In all functions esize is an input parameter.
> If user will specify is as a constant - compiler will be able to convert multiply
> to shift and add ops.
Ok, I will check this out.
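
Something along these lines, i.e. the existing generic branch applied to
all element sizes (sketch only; with a compile-time constant esize the
compiler should turn the multiply into shift/add):

static __rte_always_inline void
__rte_ring_get_elem_addr(struct rte_ring *r, uint32_t head,
	uint32_t esize, uint32_t num, void **dst1, uint32_t *n1, void **dst2)
{
	uint32_t idx, scale, nr_idx;
	uint32_t *ring = (uint32_t *)&r[1];

	/* Normalize to uint32_t */
	scale = esize / sizeof(uint32_t);
	idx = head & r->mask;
	nr_idx = idx * scale;

	*dst1 = ring + nr_idx;
	*n1 = num;

	if (idx + num > r->size) {
		*n1 = num - (r->size - idx - 1);
		*dst2 = ring;
	}
}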

> 
> > +	else {
> > +		uint32_t idx, scale, nr_idx;
> > +		uint32_t *ring = (uint32_t *)&r[1];
> > +
> > +		/* Normalize to uint32_t */
> > +		scale = esize / sizeof(uint32_t);
> > +		idx = head & r->mask;
> > +		nr_idx = idx * scale;
> > +
> > +		*dst1 = ring + nr_idx;
> > +		*n1 = num;
> > +
> > +		if (idx + num > r->size) {
> > +			*n1 = num - (r->size - idx - 1);
> > +			*dst2 = ring;
> > +		}
> > +	}
> > +}
> > +
> > +/**
> > + * @internal This function moves prod head value.
> > + */
> > +static __rte_always_inline unsigned int
> > +__rte_ring_do_enqueue_sg_elem_start(struct rte_ring *r, unsigned int
> esize,
> > +		uint32_t n, enum rte_ring_queue_behavior behavior,
> > +		struct rte_ring_sg_data *sgd, unsigned int *free_space) {
> > +	uint32_t free, head, next;
> > +
> > +	switch (r->prod.sync_type) {
> > +	case RTE_RING_SYNC_ST:
> > +		n = __rte_ring_move_prod_head(r, RTE_RING_SYNC_ST, n,
> > +			behavior, &head, &next, &free);
> > +		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&sgd-
> >ptr1,
> > +			&sgd->n1, (void **)&sgd->ptr2);
> > +		break;
> > +	case RTE_RING_SYNC_MT_HTS:
> > +		n = __rte_ring_hts_move_prod_head(r, n, behavior, &head,
> &free);
> > +		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&sgd-
> >ptr1,
> > +			&sgd->n1, (void **)&sgd->ptr2);
> > +		break;
> > +	case RTE_RING_SYNC_MT:
> > +	case RTE_RING_SYNC_MT_RTS:
> > +	default:
> > +		/* unsupported mode, shouldn't be here */
> > +		RTE_ASSERT(0);
> > +		n = 0;
> > +		free = 0;
> > +	}
> > +
> > +	if (free_space != NULL)
> > +		*free_space = free - n;
> > +	return n;
> > +}
> > +
> > +/**
> > + * Start to enqueue several objects on the ring.
> > + * Note that no actual objects are put in the queue by this function,
> > + * it just reserves space for the user on the ring.
> > + * User has to copy objects into the queue using the returned pointers.
> > + * User should call rte_ring_enqueue_sg_bulk_elem_finish to complete
> > +the
> > + * enqueue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param esize
> > + *   The size of ring element, in bytes. It must be a multiple of 4.
> > + * @param n
> > + *   The number of objects to add in the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param free_space
> > + *   if non-NULL, returns the amount of space in the ring after the
> > + *   reservation operation has finished.
> > + * @return
> > + *   The number of objects that can be enqueued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_enqueue_sg_bulk_elem_start(struct rte_ring *r, unsigned int
> esize,
> > +	unsigned int n, struct rte_ring_sg_data *sgd, unsigned int
> > +*free_space) {
> > +	return __rte_ring_do_enqueue_sg_elem_start(r, esize, n,
> > +			RTE_RING_QUEUE_FIXED, sgd, free_space); }
> > +
> > +/**
> > + * Start to enqueue several pointers to objects on the ring.
> > + * Note that no actual pointers are put in the queue by this
> > +function,
> > + * it just reserves space for the user on the ring.
> > + * User has to copy pointers to objects into the queue using the
> > + * returned pointers.
> > + * User should call rte_ring_enqueue_sg_bulk_finish to complete the
> > + * enqueue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to add in the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param free_space
> > + *   if non-NULL, returns the amount of space in the ring after the
> > + *   reservation operation has finished.
> > + * @return
> > + *   The number of objects that can be enqueued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_enqueue_sg_bulk_start(struct rte_ring *r, unsigned int n,
> > +	struct rte_ring_sg_data *sgd, unsigned int *free_space) {
> > +	return rte_ring_enqueue_sg_bulk_elem_start(r, sizeof(uintptr_t), n,
> > +							sgd, free_space);
> > +}
> > +/**
> > + * Start to enqueue several objects on the ring.
> > + * Note that no actual objects are put in the queue by this function,
> > + * it just reserves space for the user on the ring.
> > + * User has to copy objects into the queue using the returned pointers.
> > + * User should call rte_ring_enqueue_sg_bulk_elem_finish to complete
> > +the
> > + * enqueue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param esize
> > + *   The size of ring element, in bytes. It must be a multiple of 4.
> > + * @param n
> > + *   The number of objects to add in the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param free_space
> > + *   if non-NULL, returns the amount of space in the ring after the
> > + *   reservation operation has finished.
> > + * @return
> > + *   The number of objects that can be enqueued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_enqueue_sg_burst_elem_start(struct rte_ring *r, unsigned int
> esize,
> > +	unsigned int n, struct rte_ring_sg_data *sgd, unsigned int
> > +*free_space) {
> > +	return __rte_ring_do_enqueue_sg_elem_start(r, esize, n,
> > +			RTE_RING_QUEUE_VARIABLE, sgd, free_space); }
> > +
> > +/**
> > + * Start to enqueue several pointers to objects on the ring.
> > + * Note that no actual pointers are put in the queue by this
> > +function,
> > + * it just reserves space for the user on the ring.
> > + * User has to copy pointers to objects into the queue using the
> > + * returned pointers.
> > + * User should call rte_ring_enqueue_sg_bulk_finish to complete the
> > + * enqueue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to add in the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param free_space
> > + *   if non-NULL, returns the amount of space in the ring after the
> > + *   reservation operation has finished.
> > + * @return
> > + *   The number of objects that can be enqueued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_enqueue_sg_burst_start(struct rte_ring *r, unsigned int n,
> > +	struct rte_ring_sg_data *sgd, unsigned int *free_space) {
> > +	return rte_ring_enqueue_sg_burst_elem_start(r, sizeof(uintptr_t),
> n,
> > +							sgd, free_space);
> > +}
> > +
> > +/**
> > + * Complete enqueuing several objects on the ring.
> > + * Note that number of objects to enqueue should not exceed previous
> > + * enqueue_start return value.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to add to the ring.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_ring_enqueue_sg_elem_finish(struct rte_ring *r, unsigned int n) {
> > +	uint32_t tail;
> > +
> > +	switch (r->prod.sync_type) {
> > +	case RTE_RING_SYNC_ST:
> > +		n = __rte_ring_st_get_tail(&r->prod, &tail, n);
> > +		__rte_ring_st_set_head_tail(&r->prod, tail, n, 1);
> > +		break;
> > +	case RTE_RING_SYNC_MT_HTS:
> > +		n = __rte_ring_hts_get_tail(&r->hts_prod, &tail, n);
> > +		__rte_ring_hts_set_head_tail(&r->hts_prod, tail, n, 1);
> > +		break;
> > +	case RTE_RING_SYNC_MT:
> > +	case RTE_RING_SYNC_MT_RTS:
> > +	default:
> > +		/* unsupported mode, shouldn't be here */
> > +		RTE_ASSERT(0);
> > +	}
> > +}
> > +
> > +/**
> > + * Complete enqueuing several pointers to objects on the ring.
> > + * Note that number of objects to enqueue should not exceed previous
> > + * enqueue_start return value.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of pointers to objects to add to the ring.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_ring_enqueue_sg_finish(struct rte_ring *r, unsigned int n) {
> > +	rte_ring_enqueue_sg_elem_finish(r, n); }
> > +
> > +/**
> > + * @internal This function moves cons head value and copies up to *n*
> > + * objects from the ring to the user provided obj_table.
> > + */
> > +static __rte_always_inline unsigned int
> > +__rte_ring_do_dequeue_sg_elem_start(struct rte_ring *r,
> > +	uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior,
> > +	struct rte_ring_sg_data *sgd, unsigned int *available) {
> > +	uint32_t avail, head, next;
> > +
> > +	switch (r->cons.sync_type) {
> > +	case RTE_RING_SYNC_ST:
> > +		n = __rte_ring_move_cons_head(r, RTE_RING_SYNC_ST, n,
> > +			behavior, &head, &next, &avail);
> > +		__rte_ring_get_elem_addr(r, head, esize, n,
> > +					sgd->ptr1, &sgd->n1, sgd->ptr2);
> > +		break;
> > +	case RTE_RING_SYNC_MT_HTS:
> > +		n = __rte_ring_hts_move_cons_head(r, n, behavior,
> > +			&head, &avail);
> > +		__rte_ring_get_elem_addr(r, head, esize, n,
> > +					sgd->ptr1, &sgd->n1, sgd->ptr2);
> > +		break;
> > +	case RTE_RING_SYNC_MT:
> > +	case RTE_RING_SYNC_MT_RTS:
> > +	default:
> > +		/* unsupported mode, shouldn't be here */
> > +		RTE_ASSERT(0);
> > +		n = 0;
> > +		avail = 0;
> > +	}
> > +
> > +	if (available != NULL)
> > +		*available = avail - n;
> > +	return n;
> > +}
> > +
> > +/**
> > + * Start to dequeue several objects from the ring.
> > + * Note that no actual objects are copied from the queue by this function.
> > + * User has to copy objects from the queue using the returned pointers.
> > + * User should call rte_ring_dequeue_sg_bulk_elem_finish to complete
> > +the
> > + * dequeue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param esize
> > + *   The size of ring element, in bytes. It must be a multiple of 4.
> > + * @param n
> > + *   The number of objects to remove from the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param available
> > + *   If non-NULL, returns the number of remaining ring entries after the
> > + *   dequeue has finished.
> > + * @return
> > + *   The number of objects that can be dequeued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_dequeue_sg_bulk_elem_start(struct rte_ring *r, unsigned int
> esize,
> > +	unsigned int n, struct rte_ring_sg_data *sgd, unsigned int
> > +*available) {
> > +	return __rte_ring_do_dequeue_sg_elem_start(r, esize, n,
> > +			RTE_RING_QUEUE_FIXED, sgd, available); }
> > +
> > +/**
> > + * Start to dequeue several pointers to objects from the ring.
> > + * Note that no actual pointers are removed from the queue by this
> function.
> > + * User has to copy pointers to objects from the queue using the
> > + * returned pointers.
> > + * User should call rte_ring_dequeue_sg_bulk_finish to complete the
> > + * dequeue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to remove from the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param available
> > + *   If non-NULL, returns the number of remaining ring entries after the
> > + *   dequeue has finished.
> > + * @return
> > + *   The number of objects that can be dequeued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_dequeue_sg_bulk_start(struct rte_ring *r, unsigned int n,
> > +	struct rte_ring_sg_data *sgd, unsigned int *available) {
> > +	return rte_ring_dequeue_sg_bulk_elem_start(r, sizeof(uintptr_t),
> > +		n, sgd, available);
> > +}
> > +
> > +/**
> > + * Start to dequeue several objects from the ring.
> > + * Note that no actual objects are copied from the queue by this function.
> > + * User has to copy objects from the queue using the returned pointers.
> > + * User should call rte_ring_dequeue_sg_burst_elem_finish to complete
> > +the
> > + * dequeue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param esize
> > + *   The size of ring element, in bytes. It must be a multiple of 4.
> > + *   This must be the same value used while creating the ring. Otherwise
> > + *   the results are undefined.
> > + * @param n
> > + *   The number of objects to dequeue from the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param available
> > + *   If non-NULL, returns the number of remaining ring entries after the
> > + *   dequeue has finished.
> > + * @return
> > + *   The number of objects that can be dequeued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_dequeue_sg_burst_elem_start(struct rte_ring *r, unsigned int
> esize,
> > +	unsigned int n, struct rte_ring_sg_data *sgd, unsigned int
> > +*available) {
> > +	return __rte_ring_do_dequeue_sg_elem_start(r, esize, n,
> > +			RTE_RING_QUEUE_VARIABLE, sgd, available); }
> > +
> > +/**
> > + * Start to dequeue several pointers to objects from the ring.
> > + * Note that no actual pointers are removed from the queue by this
> function.
> > + * User has to copy pointers to objects from the queue using the
> > + * returned pointers.
> > + * User should call rte_ring_dequeue_sg_burst_finish to complete the
> > + * dequeue operation.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to remove from the ring.
> > + * @param sgd
> > + *   The scatter-gather data containing pointers for copying data.
> > + * @param available
> > + *   If non-NULL, returns the number of remaining ring entries after the
> > + *   dequeue has finished.
> > + * @return
> > + *   The number of objects that can be dequeued, either 0 or n
> > + */
> > +__rte_experimental
> > +static __rte_always_inline unsigned int
> > +rte_ring_dequeue_sg_burst_start(struct rte_ring *r, unsigned int n,
> > +		struct rte_ring_sg_data *sgd, unsigned int *available) {
> > +	return rte_ring_dequeue_sg_burst_elem_start(r, sizeof(uintptr_t),
> n,
> > +			sgd, available);
> > +}
> > +
> > +/**
> > + * Complete dequeuing several objects from the ring.
> > + * Note that number of objects to dequeued should not exceed previous
> > + * dequeue_start return value.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to remove from the ring.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_ring_dequeue_sg_elem_finish(struct rte_ring *r, unsigned int n) {
> > +	uint32_t tail;
> > +
> > +	switch (r->cons.sync_type) {
> > +	case RTE_RING_SYNC_ST:
> > +		n = __rte_ring_st_get_tail(&r->cons, &tail, n);
> > +		__rte_ring_st_set_head_tail(&r->cons, tail, n, 0);
> > +		break;
> > +	case RTE_RING_SYNC_MT_HTS:
> > +		n = __rte_ring_hts_get_tail(&r->hts_cons, &tail, n);
> > +		__rte_ring_hts_set_head_tail(&r->hts_cons, tail, n, 0);
> > +		break;
> > +	case RTE_RING_SYNC_MT:
> > +	case RTE_RING_SYNC_MT_RTS:
> > +	default:
> > +		/* unsupported mode, shouldn't be here */
> > +		RTE_ASSERT(0);
> > +	}
> > +}
> > +
> > +/**
> > + * Complete dequeuing several objects from the ring.
> > + * Note that number of objects to dequeued should not exceed previous
> > + * dequeue_start return value.
> > + *
> > + * @param r
> > + *   A pointer to the ring structure.
> > + * @param n
> > + *   The number of objects to remove from the ring.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_ring_dequeue_sg_finish(struct rte_ring *r, unsigned int n) {
> > +	rte_ring_dequeue_elem_finish(r, n);
> > +}
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_RING_PEEK_SG_H_ */
> > --
> > 2.17.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support
  2020-10-12 10:43  8%         ` [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-12 19:29  0%           ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-12 19:29 UTC (permalink / raw)
  To: Dekel Peled
  Cc: orika, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo, dev

12/10/2020 12:43, Dekel Peled:
> This patch updates 20.11 release notes with the changes included in
> patches of this series:
> 1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
>    packets.
> 2) ABI change in ethdev struct rte_flow_item_ipv6.
> 
> Signed-off-by: Dekel Peled <dekelp@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

Please merge the release notes changes with the code changes
in the appropriate patches.



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 1/2] cryptodev: remove crypto list end enumerators
  @ 2020-10-12 19:21  7% ` Arek Kusztal
  0 siblings, 0 replies; 200+ results
From: Arek Kusztal @ 2020-10-12 19:21 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, fiona.trahe, Arek Kusztal

This patch removes the enumerators RTE_CRYPTO_CIPHER_LIST_END,
RTE_CRYPTO_AUTH_LIST_END and RTE_CRYPTO_AEAD_LIST_END to prevent
the ABI breakage that may arise when new crypto algorithms are added.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 lib/librte_cryptodev/rte_crypto_sym.h | 36 ++++++++++++++++++---------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..84170e24e 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -87,7 +87,13 @@ union rte_crypto_sym_ofs {
 	} ofs;
 };
 
-/** Symmetric Cipher Algorithms */
+/** Symmetric Cipher Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_cipher_algorithm {
 	RTE_CRYPTO_CIPHER_NULL = 1,
 	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
@@ -132,15 +138,12 @@ enum rte_crypto_cipher_algorithm {
 	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
 	 */
 
-	RTE_CRYPTO_CIPHER_DES_DOCSISBPI,
+	RTE_CRYPTO_CIPHER_DES_DOCSISBPI
 	/**< DES algorithm using modes required by
 	 * DOCSIS Baseline Privacy Plus Spec.
 	 * Chained mbufs are not supported in this mode, i.e. rte_mbuf.next
 	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
 	 */
-
-	RTE_CRYPTO_CIPHER_LIST_END
-
 };
 
 /** Cipher algorithm name strings */
@@ -246,7 +249,13 @@ struct rte_crypto_cipher_xform {
 	} iv;	/**< Initialisation vector parameters */
 };
 
-/** Symmetric Authentication / Hash Algorithms */
+/** Symmetric Authentication / Hash Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_auth_algorithm {
 	RTE_CRYPTO_AUTH_NULL = 1,
 	/**< NULL hash algorithm. */
@@ -312,10 +321,8 @@ enum rte_crypto_auth_algorithm {
 	/**< HMAC using 384 bit SHA3 algorithm. */
 	RTE_CRYPTO_AUTH_SHA3_512,
 	/**< 512 bit SHA3 algorithm. */
-	RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+	RTE_CRYPTO_AUTH_SHA3_512_HMAC
 	/**< HMAC using 512 bit SHA3 algorithm. */
-
-	RTE_CRYPTO_AUTH_LIST_END
 };
 
 /** Authentication algorithm name strings */
@@ -406,15 +413,20 @@ struct rte_crypto_auth_xform {
 };
 
 
-/** Symmetric AEAD Algorithms */
+/** Symmetric AEAD Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_aead_algorithm {
 	RTE_CRYPTO_AEAD_AES_CCM = 1,
 	/**< AES algorithm in CCM mode. */
 	RTE_CRYPTO_AEAD_AES_GCM,
 	/**< AES algorithm in GCM mode. */
-	RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
+	RTE_CRYPTO_AEAD_CHACHA20_POLY1305
 	/**< Chacha20 cipher with poly1305 authenticator */
-	RTE_CRYPTO_AEAD_LIST_END
 };
 
 /** AEAD algorithm name strings */
-- 
2.17.1
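
The problem the patch avoids is applications using the *_LIST_END enumerators
as a loop bound or array size, since those values silently change whenever an
algorithm is appended. Below is a sketch of the alternative: use the existing
capability query rather than a LIST_END bound. It is illustrative only and not
part of the patch.

#include <rte_cryptodev.h>

/* Check whether a device supports a given cipher without relying on a
 * RTE_CRYPTO_*_LIST_END value as an upper bound.
 */
static int
device_supports_aes_cbc(uint8_t dev_id)
{
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.algo.cipher = RTE_CRYPTO_CIPHER_AES_CBC,
	};

	/* NULL means the algorithm is not supported by this device. */
	return rte_cryptodev_sym_capability_get(dev_id, &idx) != NULL;
}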


^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
  2020-10-06  8:26  4%   ` Van Haaren, Harry
@ 2020-10-12 19:09  4%     ` Pavan Nikhilesh Bhagavatula
  2020-10-13 19:20  4%       ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2020-10-12 19:09 UTC (permalink / raw)
  To: Van Haaren, Harry, McDaniel, Timothy, Jerin Jacob Kollanukkaran,
	Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
	Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
  Cc: dev, Carrillo, Erik G, Eads, Gage, hemant.agrawal

>> Subject: [PATCH v2 2/2] eventdev: update app and examples for new
>eventdev ABI
>>
>> Several data structures and constants changed, or were added,
>> in the previous patch.  This commit updates the dependent
>> apps and examples to use the new ABI.
>>
>> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

With fixes to trace framework
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

>> ---	
>
>With this patch applied, the compilation works fine; however, runtime
>fails.
>Note that there is a dependency on the following fix which Timothy
>upstreamed:
>http://patches.dpdk.org/patch/79713/
>
>The above linked patch increases the CTF trace size, and fixes the
>following error:
>./dpdk-test
>EAL: __rte_trace_point_emit_field():442 CTF field is too long
>EAL: __rte_trace_point_register():468 missing rte_trace_emit_header() in register fn
>
>
>>  app/test-eventdev/evt_common.h                     | 11 ++++++++
>>  app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++-
>-----
>>  app/test-eventdev/test_order_common.c              |  1 +
>>  app/test-eventdev/test_order_queue.c               | 29
>++++++++++++++++------
>>  app/test/test_eventdev.c                           |  4 +--
>>  .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
>>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
>>  examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
>>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
>>  examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
>>  examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
>>  11 files changed, 80 insertions(+), 26 deletions(-)
>>
>> diff --git a/app/test-eventdev/evt_common.h b/app/test-
>eventdev/evt_common.h
>> index f9d7378..a1da1cf 100644
>> --- a/app/test-eventdev/evt_common.h
>> +++ b/app/test-eventdev/evt_common.h
>> @@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
>>  			true : false;
>>  }
>>
>> +static inline bool
>> +evt_has_flow_id(uint8_t dev_id)
>> +{
>> +	struct rte_event_dev_info dev_info;
>> +
>> +	rte_event_dev_info_get(dev_id, &dev_info);
>> +	return (dev_info.event_dev_cap &
>> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
>> +			true : false;
>> +}
>> +
>>  static inline int
>>  evt_service_setup(uint32_t service_id)
>>  {
>> @@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options
>*opt, uint8_t
>> nb_queues,
>>  			.dequeue_timeout_ns = opt->deq_tmo_nsec,
>>  			.nb_event_queues = nb_queues,
>>  			.nb_event_ports = nb_ports,
>> +			.nb_single_link_event_port_queues = 0,
>>  			.nb_events_limit  = info.max_num_events,
>>  			.nb_event_queue_flows = opt->nb_flows,
>>  			.nb_event_port_dequeue_depth =
>> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-
>> eventdev/test_order_atq.c
>> index 3366cfc..cfcb1dc 100644
>> --- a/app/test-eventdev/test_order_atq.c
>> +++ b/app/test-eventdev/test_order_atq.c
>> @@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event
>*const ev)
>>  }
>>
>>  static int
>> -order_atq_worker(void *arg)
>> +order_atq_worker(void *arg, const bool flow_id_cap)
>>  {
>>  	ORDER_WORKER_INIT;
>>  	struct rte_event ev;
>> @@ -34,6 +34,9 @@ order_atq_worker(void *arg)
>>  			continue;
>>  		}
>>
>> +		if (!flow_id_cap)
>> +			ev.flow_id = ev.mbuf->udata64;
>> +
>>  		if (ev.sub_event_type == 0) { /* stage 0 from producer
>*/
>>  			order_atq_process_stage_0(&ev);
>>  			while (rte_event_enqueue_burst(dev_id, port,
>&ev, 1)
>> @@ -50,7 +53,7 @@ order_atq_worker(void *arg)
>>  }
>>
>>  static int
>> -order_atq_worker_burst(void *arg)
>> +order_atq_worker_burst(void *arg, const bool flow_id_cap)
>>  {
>>  	ORDER_WORKER_INIT;
>>  	struct rte_event ev[BURST_SIZE];
>> @@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
>>  		}
>>
>>  		for (i = 0; i < nb_rx; i++) {
>> +			if (!flow_id_cap)
>> +				ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>  			if (ev[i].sub_event_type == 0) { /*stage 0 */
>>  				order_atq_process_stage_0(&ev[i]);
>>  			} else if (ev[i].sub_event_type == 1) { /* stage 1
>*/
>> @@ -95,11 +101,19 @@ worker_wrapper(void *arg)
>>  {
>>  	struct worker_data *w  = arg;
>>  	const bool burst = evt_has_burst_mode(w->dev_id);
>> -
>> -	if (burst)
>> -		return order_atq_worker_burst(arg);
>> -	else
>> -		return order_atq_worker(arg);
>> +	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
>> +
>> +	if (burst) {
>> +		if (flow_id_cap)
>> +			return order_atq_worker_burst(arg, true);
>> +		else
>> +			return order_atq_worker_burst(arg, false);
>> +	} else {
>> +		if (flow_id_cap)
>> +			return order_atq_worker(arg, true);
>> +		else
>> +			return order_atq_worker(arg, false);
>> +	}
>>  }
>>
>>  static int
>> diff --git a/app/test-eventdev/test_order_common.c b/app/test-
>> eventdev/test_order_common.c
>> index 4190f9a..7942390 100644
>> --- a/app/test-eventdev/test_order_common.c
>> +++ b/app/test-eventdev/test_order_common.c
>> @@ -49,6 +49,7 @@ order_producer(void *arg)
>>  		const uint32_t flow = (uintptr_t)m % nb_flows;
>>  		/* Maintain seq number per flow */
>>  		m->seqn = producer_flow_seq[flow]++;
>> +		m->udata64 = flow;
>>
>>  		ev.flow_id = flow;
>>  		ev.mbuf = m;
>> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-
>> eventdev/test_order_queue.c
>> index 495efd9..1511c00 100644
>> --- a/app/test-eventdev/test_order_queue.c
>> +++ b/app/test-eventdev/test_order_queue.c
>> @@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event
>*const ev)
>>  }
>>
>>  static int
>> -order_queue_worker(void *arg)
>> +order_queue_worker(void *arg, const bool flow_id_cap)
>>  {
>>  	ORDER_WORKER_INIT;
>>  	struct rte_event ev;
>> @@ -34,6 +34,9 @@ order_queue_worker(void *arg)
>>  			continue;
>>  		}
>>
>> +		if (!flow_id_cap)
>> +			ev.flow_id = ev.mbuf->udata64;
>> +
>>  		if (ev.queue_id == 0) { /* from ordered queue */
>>  			order_queue_process_stage_0(&ev);
>>  			while (rte_event_enqueue_burst(dev_id, port,
>&ev, 1)
>> @@ -50,7 +53,7 @@ order_queue_worker(void *arg)
>>  }
>>
>>  static int
>> -order_queue_worker_burst(void *arg)
>> +order_queue_worker_burst(void *arg, const bool flow_id_cap)
>>  {
>>  	ORDER_WORKER_INIT;
>>  	struct rte_event ev[BURST_SIZE];
>> @@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
>>  		}
>>
>>  		for (i = 0; i < nb_rx; i++) {
>> +
>> +			if (!flow_id_cap)
>> +				ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>  			if (ev[i].queue_id == 0) { /* from ordered queue
>*/
>>  				order_queue_process_stage_0(&ev[i]);
>>  			} else if (ev[i].queue_id == 1) {/* from atomic
>queue */
>> @@ -95,11 +102,19 @@ worker_wrapper(void *arg)
>>  {
>>  	struct worker_data *w  = arg;
>>  	const bool burst = evt_has_burst_mode(w->dev_id);
>> -
>> -	if (burst)
>> -		return order_queue_worker_burst(arg);
>> -	else
>> -		return order_queue_worker(arg);
>> +	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
>> +
>> +	if (burst) {
>> +		if (flow_id_cap)
>> +			return order_queue_worker_burst(arg, true);
>> +		else
>> +			return order_queue_worker_burst(arg, false);
>> +	} else {
>> +		if (flow_id_cap)
>> +			return order_queue_worker(arg, true);
>> +		else
>> +			return order_queue_worker(arg, false);
>> +	}
>>  }
>>
>>  static int
>> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
>> index 43ccb1c..62019c1 100644
>> --- a/app/test/test_eventdev.c
>> +++ b/app/test/test_eventdev.c
>> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>>  	if (!(info.event_dev_cap &
>>  	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>>  		pconf.enqueue_depth =
>info.max_event_port_enqueue_depth;
>> -		pconf.disable_implicit_release = 1;
>> +		pconf.event_port_cfg =
>> RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>  		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>>  		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d",
>ret);
>> -		pconf.disable_implicit_release = 0;
>> +		pconf.event_port_cfg = 0;
>>  	}
>>
>>  	ret = rte_event_port_setup(TEST_DEV_ID,
>info.max_event_ports,
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c
>> b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> index 42ff4ee..f70ab0c 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data
>*worker_data)
>>  	struct rte_event_dev_config config = {
>>  			.nb_event_queues = nb_queues,
>>  			.nb_event_ports = nb_ports,
>> +			.nb_single_link_event_port_queues = 1,
>>  			.nb_events_limit  = 4096,
>>  			.nb_event_queue_flows = 1024,
>>  			.nb_event_port_dequeue_depth = 128,
>> @@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data
>*worker_data)
>>  			.schedule_type = cdata.queue_type,
>>  			.priority =
>RTE_EVENT_DEV_PRIORITY_NORMAL,
>>  			.nb_atomic_flows = 1024,
>> -		.nb_atomic_order_sequences = 1024,
>> +			.nb_atomic_order_sequences = 1024,
>>  	};
>>  	struct rte_event_queue_conf tx_q_conf = {
>>  			.priority =
>RTE_EVENT_DEV_PRIORITY_HIGHEST,
>> @@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data
>*worker_data)
>>  	disable_implicit_release = (dev_info.event_dev_cap &
>>
>	RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>>
>> -	wkr_p_conf.disable_implicit_release = disable_implicit_release;
>> +	wkr_p_conf.event_port_cfg = disable_implicit_release ?
>> +		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>>
>>  	if (dev_info.max_num_events < config.nb_events_limit)
>>  		config.nb_events_limit = dev_info.max_num_events;
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c
>> b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> index 55bb2f7..ca6cd20 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct
>worker_data
>> *worker_data)
>>  	struct rte_event_dev_config config = {
>>  			.nb_event_queues = nb_queues,
>>  			.nb_event_ports = nb_ports,
>> +			.nb_single_link_event_port_queues = 0,
>>  			.nb_events_limit  = 4096,
>>  			.nb_event_queue_flows = 1024,
>>  			.nb_event_port_dequeue_depth = 128,
>> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c
>b/examples/l2fwd-
>> event/l2fwd_event_generic.c
>> index 2dc95e5..9a3167c 100644
>> --- a/examples/l2fwd-event/l2fwd_event_generic.c
>> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
>> @@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct
>l2fwd_resources
>> *rsrc)
>>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>  		event_p_conf.enqueue_depth =
>def_p_conf.enqueue_depth;
>>
>> -	event_p_conf.disable_implicit_release =
>> -		evt_rsrc->disable_implicit_release;
>> +	event_p_conf.event_port_cfg = 0;
>> +	if (evt_rsrc->disable_implicit_release)
>> +		event_p_conf.event_port_cfg |=
>> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>> +
>>  	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c
>b/examples/l2fwd-
>> event/l2fwd_event_internal_port.c
>> index 63d57b4..203a14c 100644
>> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
>> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
>> @@ -123,8 +123,10 @@
>l2fwd_event_port_setup_internal_port(struct
>> l2fwd_resources *rsrc)
>>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>  		event_p_conf.enqueue_depth =
>def_p_conf.enqueue_depth;
>>
>> -	event_p_conf.disable_implicit_release =
>> -		evt_rsrc->disable_implicit_release;
>> +	event_p_conf.event_port_cfg = 0;
>> +	if (evt_rsrc->disable_implicit_release)
>> +		event_p_conf.event_port_cfg |=
>> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>
>	event_p_id++) {
>> diff --git a/examples/l3fwd/l3fwd_event_generic.c
>> b/examples/l3fwd/l3fwd_event_generic.c
>> index f8c9843..c80573f 100644
>> --- a/examples/l3fwd/l3fwd_event_generic.c
>> +++ b/examples/l3fwd/l3fwd_event_generic.c
>> @@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
>>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>  		event_p_conf.enqueue_depth =
>def_p_conf.enqueue_depth;
>>
>> -	event_p_conf.disable_implicit_release =
>> -		evt_rsrc->disable_implicit_release;
>> +	event_p_conf.event_port_cfg = 0;
>> +	if (evt_rsrc->disable_implicit_release)
>> +		event_p_conf.event_port_cfg |=
>> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>> +
>>  	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c
>> b/examples/l3fwd/l3fwd_event_internal_port.c
>> index 03ac581..9916a7f 100644
>> --- a/examples/l3fwd/l3fwd_event_internal_port.c
>> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
>> @@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
>>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>  		event_p_conf.enqueue_depth =
>def_p_conf.enqueue_depth;
>>
>> -	event_p_conf.disable_implicit_release =
>> -		evt_rsrc->disable_implicit_release;
>> +	event_p_conf.event_port_cfg = 0;
>> +	if (evt_rsrc->disable_implicit_release)
>> +		event_p_conf.event_port_cfg |=
>> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>
>	event_p_id++) {
>> --
>> 2.6.4
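
The changes above repeat the same pattern in each test app and example: query
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID once, and if the device does not carry the
flow ID, restore it from mbuf udata64 after dequeue. A condensed sketch of that
pattern follows (simplified from the quoted diff, not a drop-in replacement
for it):

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* True if the device preserves ev.flow_id from enqueue to dequeue. */
static inline bool
dev_carries_flow_id(uint8_t dev_id)
{
	struct rte_event_dev_info info;

	rte_event_dev_info_get(dev_id, &info);
	return info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}

/* Producer side: stash the flow id in the mbuf so the worker can
 * restore it when the capability is absent.
 */
static inline void
save_flow_id(struct rte_event *ev)
{
	ev->mbuf->udata64 = ev->flow_id;
}

/* Worker side: restore the flow id after dequeue if needed. */
static inline void
restore_flow_id(struct rte_event *ev, bool flow_id_cap)
{
	if (!flow_id_cap)
		ev->flow_id = ev->mbuf->udata64;
}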


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [EXT] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-05 20:27  2% ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
  2020-10-06  8:15  0%   ` Van Haaren, Harry
@ 2020-10-12 19:06  0%   ` Pavan Nikhilesh Bhagavatula
  1 sibling, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2020-10-12 19:06 UTC (permalink / raw)
  To: Timothy McDaniel, Hemant Agrawal, Nipun Gupta,
	Mattias Rönnblom, Jerin Jacob Kollanukkaran, Liang Ma,
	Peter Mccarthy, Harry van Haaren, Nikhil Rao, Ray Kinsella,
	Neil Horman
  Cc: dev, erik.g.carrillo, gage.eads

>This commit implements the eventdev ABI changes required by
>the DLB PMD.
>
>Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>

For octeontx/octeontx2

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

>---
> drivers/event/dpaa/dpaa_eventdev.c             |  3 +-
> drivers/event/dpaa2/dpaa2_eventdev.c           |  5 +-
> drivers/event/dsw/dsw_evdev.c                  |  3 +-
> drivers/event/octeontx/ssovf_evdev.c           |  5 +-
> drivers/event/octeontx2/otx2_evdev.c           |  3 +-
> drivers/event/opdl/opdl_evdev.c                |  3 +-
> drivers/event/skeleton/skeleton_eventdev.c     |  5 +-
> drivers/event/sw/sw_evdev.c                    |  8 ++--
> drivers/event/sw/sw_evdev_selftest.c           |  6 +--
> lib/librte_eventdev/rte_event_eth_tx_adapter.c |  2 +-
> lib/librte_eventdev/rte_eventdev.c             | 66
>+++++++++++++++++++++++---
> lib/librte_eventdev/rte_eventdev.h             | 51 ++++++++++++++++----
> lib/librte_eventdev/rte_eventdev_pmd_pci.h     |  1 -
> lib/librte_eventdev/rte_eventdev_trace.h       |  7 +--
> lib/librte_eventdev/rte_eventdev_version.map   |  4 +-
> 15 files changed, 134 insertions(+), 38 deletions(-)
>
>diff --git a/drivers/event/dpaa/dpaa_eventdev.c
>b/drivers/event/dpaa/dpaa_eventdev.c
>index b5ae87a..07cd079 100644
>--- a/drivers/event/dpaa/dpaa_eventdev.c
>+++ b/drivers/event/dpaa/dpaa_eventdev.c
>@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev
>*dev,
> 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
> 		RTE_EVENT_DEV_CAP_BURST_MODE |
> 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static int
>diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c
>b/drivers/event/dpaa2/dpaa2_eventdev.c
>index 3ae4441..712db6c 100644
>--- a/drivers/event/dpaa2/dpaa2_eventdev.c
>+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
>@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev
>*dev,
> 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
>+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
>
> }
>
>@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct
>rte_eventdev *dev, uint8_t port_id,
> 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
> 	port_conf->enqueue_depth =
> 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static int
>diff --git a/drivers/event/dsw/dsw_evdev.c
>b/drivers/event/dsw/dsw_evdev.c
>index e796975..933a5a5 100644
>--- a/drivers/event/dsw/dsw_evdev.c
>+++ b/drivers/event/dsw/dsw_evdev.c
>@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev
>__rte_unused,
> 		.event_dev_cap =
>RTE_EVENT_DEV_CAP_BURST_MODE|
> 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
> 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
>-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
>+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> 	};
> }
>
>diff --git a/drivers/event/octeontx/ssovf_evdev.c
>b/drivers/event/octeontx/ssovf_evdev.c
>index 4fc4e8f..1c6bcca 100644
>--- a/drivers/event/octeontx/ssovf_evdev.c
>+++ b/drivers/event/octeontx/ssovf_evdev.c
>@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct
>rte_event_dev_info *dev_info)
>
>	RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
>
>	RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>
>	RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-
>	RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+
>	RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+
>	RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
>
> }
>
>@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev,
>uint8_t port_id,
> 	port_conf->new_event_threshold = edev->max_num_events;
> 	port_conf->dequeue_depth = 1;
> 	port_conf->enqueue_depth = 1;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static void
>diff --git a/drivers/event/octeontx2/otx2_evdev.c
>b/drivers/event/octeontx2/otx2_evdev.c
>index b8b57c3..ae35bb5 100644
>--- a/drivers/event/octeontx2/otx2_evdev.c
>+++ b/drivers/event/octeontx2/otx2_evdev.c
>@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev
>*event_dev,
>
>	RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
>
>	RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>
>	RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-
>	RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+
>	RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+
>	RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static void
>diff --git a/drivers/event/opdl/opdl_evdev.c
>b/drivers/event/opdl/opdl_evdev.c
>index 9b2f75f..3050578 100644
>--- a/drivers/event/opdl/opdl_evdev.c
>+++ b/drivers/event/opdl/opdl_evdev.c
>@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct
>rte_event_dev_info *info)
> 		.max_event_port_dequeue_depth =
>MAX_OPDL_CONS_Q_DEPTH,
> 		.max_event_port_enqueue_depth =
>MAX_OPDL_CONS_Q_DEPTH,
> 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
>-		.event_dev_cap =
>RTE_EVENT_DEV_CAP_BURST_MODE,
>+		.event_dev_cap =
>RTE_EVENT_DEV_CAP_BURST_MODE |
>+
>RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
> 	};
>
> 	*info = evdev_opdl_info;
>diff --git a/drivers/event/skeleton/skeleton_eventdev.c
>b/drivers/event/skeleton/skeleton_eventdev.c
>index c889220..6fd1102 100644
>--- a/drivers/event/skeleton/skeleton_eventdev.c
>+++ b/drivers/event/skeleton/skeleton_eventdev.c
>@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct
>rte_eventdev *dev,
> 	dev_info->max_num_events = (1ULL << 20);
> 	dev_info->event_dev_cap =
>RTE_EVENT_DEV_CAP_QUEUE_QOS |
>
>	RTE_EVENT_DEV_CAP_BURST_MODE |
>-
>	RTE_EVENT_DEV_CAP_EVENT_QOS;
>+
>	RTE_EVENT_DEV_CAP_EVENT_QOS |
>+
>	RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static int
>@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct
>rte_eventdev *dev, uint8_t port_id,
> 	port_conf->new_event_threshold = 32 * 1024;
> 	port_conf->dequeue_depth = 16;
> 	port_conf->enqueue_depth = 16;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static void
>diff --git a/drivers/event/sw/sw_evdev.c
>b/drivers/event/sw/sw_evdev.c
>index 98dae71..058f568 100644
>--- a/drivers/event/sw/sw_evdev.c
>+++ b/drivers/event/sw/sw_evdev.c
>@@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev,
>uint8_t port_id,
> 	}
>
> 	p->inflight_max = conf->new_event_threshold;
>-	p->implicit_release = !conf->disable_implicit_release;
>+	p->implicit_release = !(conf->event_port_cfg &
>+
>	RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>
> 	/* check if ring exists, same as rx_worker above */
> 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
>@@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev,
>uint8_t port_id,
> 	port_conf->new_event_threshold = 1024;
> 	port_conf->dequeue_depth = 16;
> 	port_conf->enqueue_depth = 16;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static int
>@@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct
>rte_event_dev_info *info)
>
>	RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
>
>	RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>
>	RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-
>	RTE_EVENT_DEV_CAP_NONSEQ_MODE),
>+				RTE_EVENT_DEV_CAP_NONSEQ_MODE
>|
>+
>	RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
> 	};
>
> 	*info = evdev_sw_info;
>diff --git a/drivers/event/sw/sw_evdev_selftest.c
>b/drivers/event/sw/sw_evdev_selftest.c
>index 38c21fa..4a7d823 100644
>--- a/drivers/event/sw/sw_evdev_selftest.c
>+++ b/drivers/event/sw/sw_evdev_selftest.c
>@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
> 			.new_event_threshold = 1024,
> 			.dequeue_depth = 32,
> 			.enqueue_depth = 64,
>-			.disable_implicit_release = 0,
> 	};
> 	if (num_ports > MAX_PORTS)
> 		return -1;
>@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
> 				.new_event_threshold = 128,
> 				.dequeue_depth = 32,
> 				.enqueue_depth = 64,
>-				.disable_implicit_release = 0,
> 		};
> 		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
> 			printf("%d Error setting up port\n", __LINE__);
>@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
> 		.new_event_threshold = 128,
> 		.dequeue_depth = 32,
> 		.enqueue_depth = 64,
>-		.disable_implicit_release = 0,
> 	};
> 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
> 		printf("%d Error setting up port\n", __LINE__);
>@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t
>disable_implicit_release)
> 	 * only be initialized once - and this needs to be set for multiple
>runs
> 	 */
> 	conf.new_event_threshold = 512;
>-	conf.disable_implicit_release = disable_implicit_release;
>+	conf.event_port_cfg = disable_implicit_release ?
>+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
> 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
> 		printf("Error setting up RX port\n");
>diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>index bb21dc4..8a72256 100644
>--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id,
>uint8_t dev_id,
> 		return ret;
> 	}
>
>-	pc->disable_implicit_release = 0;
>+	pc->event_port_cfg = 0;
> 	ret = rte_event_port_setup(dev_id, port_id, pc);
> 	if (ret) {
> 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
>diff --git a/lib/librte_eventdev/rte_eventdev.c
>b/lib/librte_eventdev/rte_eventdev.c
>index 82c177c..3a5b738 100644
>--- a/lib/librte_eventdev/rte_eventdev.c
>+++ b/lib/librte_eventdev/rte_eventdev.c
>@@ -32,6 +32,7 @@
> #include <rte_ethdev.h>
> #include <rte_cryptodev.h>
> #include <rte_cryptodev_pmd.h>
>+#include <rte_compat.h>
>
> #include "rte_eventdev.h"
> #include "rte_eventdev_pmd.h"
>@@ -437,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
> 					dev_id);
> 		return -EINVAL;
> 	}
>-	if (dev_conf->nb_event_queues > info.max_event_queues) {
>-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d >
>max_event_queues=%d",
>-		dev_id, dev_conf->nb_event_queues,
>info.max_event_queues);
>+	if (dev_conf->nb_event_queues > info.max_event_queues +
>+			info.max_single_link_event_port_queue_pairs)
>{
>+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d >
>max_event_queues=%d +
>max_single_link_event_port_queue_pairs=%d",
>+				 dev_id, dev_conf->nb_event_queues,
>+				 info.max_event_queues,
>+
>info.max_single_link_event_port_queue_pairs);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_event_queues -
>+			dev_conf->nb_single_link_event_port_queues >
>+			info.max_event_queues) {
>+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d -
>nb_single_link_event_port_queues=%d > max_event_queues=%d",
>+				 dev_id, dev_conf->nb_event_queues,
>+				 dev_conf-
>>nb_single_link_event_port_queues,
>+				 info.max_event_queues);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_single_link_event_port_queues >
>+			dev_conf->nb_event_queues) {
>+		RTE_EDEV_LOG_ERR("dev%d
>nb_single_link_event_port_queues=%d > nb_event_queues=%d",
>+				 dev_id,
>+				 dev_conf-
>>nb_single_link_event_port_queues,
>+				 dev_conf->nb_event_queues);
> 		return -EINVAL;
> 	}
>
>@@ -448,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
> 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot
>be zero", dev_id);
> 		return -EINVAL;
> 	}
>-	if (dev_conf->nb_event_ports > info.max_event_ports) {
>-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d >
>max_event_ports= %d",
>-		dev_id, dev_conf->nb_event_ports,
>info.max_event_ports);
>+	if (dev_conf->nb_event_ports > info.max_event_ports +
>+			info.max_single_link_event_port_queue_pairs)
>{
>+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d >
>max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
>+				 dev_id, dev_conf->nb_event_ports,
>+				 info.max_event_ports,
>+
>info.max_single_link_event_port_queue_pairs);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_event_ports -
>+			dev_conf->nb_single_link_event_port_queues
>+			> info.max_event_ports) {
>+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d -
>nb_single_link_event_port_queues=%d > max_event_ports=%d",
>+				 dev_id, dev_conf->nb_event_ports,
>+				 dev_conf-
>>nb_single_link_event_port_queues,
>+				 info.max_event_ports);
>+		return -EINVAL;
>+	}
>+
>+	if (dev_conf->nb_single_link_event_port_queues >
>+	    dev_conf->nb_event_ports) {
>+		RTE_EDEV_LOG_ERR(
>+				 "dev%d
>nb_single_link_event_port_queues=%d > nb_event_ports=%d",
>+				 dev_id,
>+				 dev_conf-
>>nb_single_link_event_port_queues,
>+				 dev_conf->nb_event_ports);
> 		return -EINVAL;
> 	}
>
>@@ -737,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t
>port_id,
> 		return -EINVAL;
> 	}
>
>-	if (port_conf && port_conf->disable_implicit_release &&
>+	if (port_conf &&
>+	    (port_conf->event_port_cfg &
>RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
> 	    !(dev->data->event_dev_cap &
> 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
> 		RTE_EDEV_LOG_ERR(
>@@ -830,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id,
>uint8_t port_id, uint32_t attr_id,
> 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
> 		*attr_value = dev->data-
>>ports_cfg[port_id].new_event_threshold;
> 		break;
>+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
>+	{
>+		uint32_t config;
>+
>+		config = dev->data-
>>ports_cfg[port_id].event_port_cfg;
>+		*attr_value = !!(config &
>RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>+		break;
>+	}
> 	default:
> 		return -EINVAL;
> 	};
>diff --git a/lib/librte_eventdev/rte_eventdev.h
>b/lib/librte_eventdev/rte_eventdev.h
>index 7dc8323..ce1fc2c 100644
>--- a/lib/librte_eventdev/rte_eventdev.h
>+++ b/lib/librte_eventdev/rte_eventdev.h
>@@ -291,6 +291,12 @@ struct rte_event;
>  * single queue to each port or map a single queue to many port.
>  */
>
>+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
>+/**< Event device preserves the flow ID from the enqueued
>+ * event to the dequeued event if the flag is set. Otherwise,
>+ * the content of this field is implementation dependent.
>+ */
>+
> /* Event device priority levels */
> #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> /**< Highest priority expressed across eventdev subsystem
>@@ -380,6 +386,10 @@ struct rte_event_dev_info {
> 	 * event port by this device.
> 	 * A device that does not support bulk enqueue will set this as 1.
> 	 */
>+	uint8_t max_event_port_links;
>+	/**< Maximum number of queues that can be linked to a single
>event
>+	 * port by this device.
>+	 */
> 	int32_t max_num_events;
> 	/**< A *closed system* event dev has a limit on the number of
>events it
> 	 * can manage at a time. An *open system* event dev does not
>have a
>@@ -387,6 +397,12 @@ struct rte_event_dev_info {
> 	 */
> 	uint32_t event_dev_cap;
> 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
>+	uint8_t max_single_link_event_port_queue_pairs;
>+	/**< Maximum number of event ports and queues that are
>optimized for
>+	 * (and only capable of) single-link configurations supported by
>this
>+	 * device. These ports and queues are not accounted for in
>+	 * max_event_ports or max_event_queues.
>+	 */
> };
>
> /**
>@@ -494,6 +510,14 @@ struct rte_event_dev_config {
> 	 */
> 	uint32_t event_dev_cfg;
> 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
>+	uint8_t nb_single_link_event_port_queues;
>+	/**< Number of event ports and queues that will be singly-
>linked to
>+	 * each other. These are a subset of the overall event ports and
>+	 * queues; this value cannot exceed *nb_event_ports* or
>+	 * *nb_event_queues*. If the device has ports and queues that
>are
>+	 * optimized for single-link usage, this field is a hint for how
>many
>+	 * to allocate; otherwise, regular event ports and queues can be
>used.
>+	 */
> };
>
> /**
>@@ -519,7 +543,6 @@ int
> rte_event_dev_configure(uint8_t dev_id,
> 			const struct rte_event_dev_config *dev_conf);
>
>-
> /* Event queue specific APIs */
>
> /* Event queue configuration bitmap flags */
>@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id,
>uint8_t queue_id, uint32_t attr_id,
>
> /* Event port specific APIs */
>
>+/* Event port configuration bitmap flags */
>+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>+/**< Configure the port not to release outstanding events in
>+ * rte_event_dev_dequeue_burst(). If set, all events received through
>+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE
>or
>+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>+ */
>+#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>+/**< This event port links only to a single event queue.
>+ *
>+ *  @see rte_event_port_setup(), rte_event_port_link()
>+ */
>+
> /** Event port configuration structure */
> struct rte_event_port_conf {
> 	int32_t new_event_threshold;
>@@ -698,13 +735,7 @@ struct rte_event_port_conf {
> 	 * which previously supplied to rte_event_dev_configure().
> 	 * Ignored when device is not
>RTE_EVENT_DEV_CAP_BURST_MODE capable.
> 	 */
>-	uint8_t disable_implicit_release;
>-	/**< Configure the port not to release outstanding events in
>-	 * rte_event_dev_dequeue_burst(). If true, all events received
>through
>-	 * the port must be explicitly released with
>RTE_EVENT_OP_RELEASE or
>-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is
>not
>-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>-	 */
>+	uint32_t event_port_cfg; /**< Port cfg
>flags(EVENT_PORT_CFG_) */
> };
>
> /**
>@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t
>port_id,
>  * The new event threshold of the port
>  */
> #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>+/**
>+ * The implicit release disable attribute of the port
>+ */
>+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>
> /**
>  * Get an attribute from a port.
>diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>index 443cd38..a3f9244 100644
>--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver
>*pci_drv,
> 	return -ENXIO;
> }
>
>-
> /**
>  * @internal
>  * Wrapper for use by pci drivers as a .remove function to detach a
>event
>diff --git a/lib/librte_eventdev/rte_eventdev_trace.h
>b/lib/librte_eventdev/rte_eventdev_trace.h
>index 4de6341..5ec43d8 100644
>--- a/lib/librte_eventdev/rte_eventdev_trace.h
>+++ b/lib/librte_eventdev/rte_eventdev_trace.h
>@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_u32(dev_conf-
>>nb_event_port_dequeue_depth);
> 	rte_trace_point_emit_u32(dev_conf-
>>nb_event_port_enqueue_depth);
> 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
>+	rte_trace_point_emit_u8(dev_conf-
>>nb_single_link_event_port_queues);
> 	rte_trace_point_emit_int(rc);
> )
>
>@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> 	rte_trace_point_emit_int(rc);
> )
>
>@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> 	rte_trace_point_emit_ptr(conf_cb);
> 	rte_trace_point_emit_int(rc);
> )
>@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> )
>
> RTE_TRACE_POINT(
>diff --git a/lib/librte_eventdev/rte_eventdev_version.map
>b/lib/librte_eventdev/rte_eventdev_version.map
>index 3d9d0ca..2846d04 100644
>--- a/lib/librte_eventdev/rte_eventdev_version.map
>+++ b/lib/librte_eventdev/rte_eventdev_version.map
>@@ -100,7 +100,6 @@ EXPERIMENTAL {
> 	# added in 20.05
> 	__rte_eventdev_trace_configure;
> 	__rte_eventdev_trace_queue_setup;
>-	__rte_eventdev_trace_port_setup;
> 	__rte_eventdev_trace_port_link;
> 	__rte_eventdev_trace_port_unlink;
> 	__rte_eventdev_trace_start;
>@@ -134,4 +133,7 @@ EXPERIMENTAL {
> 	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
> 	__rte_eventdev_trace_crypto_adapter_start;
> 	__rte_eventdev_trace_crypto_adapter_stop;
>+
>+	# changed in 20.11
>+	__rte_eventdev_trace_port_setup;
> };
>--
>2.6.4
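
For application writers following this thread, the visible part of the change
is that the per-port boolean disable_implicit_release becomes a bit in the new
event_port_cfg field. A sketch of port setup under the new ABI is shown below;
it is illustrative and not taken from the patch.

#include <rte_eventdev.h>

/* Configure a port, disabling implicit release only when the device
 * advertises support for doing so, as before the ABI change.
 */
static int
setup_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_dev_info info;
	struct rte_event_port_conf conf;

	rte_event_dev_info_get(dev_id, &info);
	rte_event_port_default_conf_get(dev_id, port_id, &conf);

	conf.event_port_cfg = 0;
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

	return rte_event_port_setup(dev_id, port_id, &conf);
}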


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats
  2020-10-10  8:09  0%               ` Thomas Monjalon
@ 2020-10-12 17:02  0%                 ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-12 17:02 UTC (permalink / raw)
  To: Thomas Monjalon, Min Hu (Connor), Honnappa Nagarahalli
  Cc: Olivier Matz, Stephen Hemminger, techboard, bruce.richardson,
	jerinj, Ray Kinsella, dev

On 10/10/2020 9:09 AM, Thomas Monjalon wrote:
> 09/10/2020 22:32, Ferruh Yigit:
>> On 10/6/2020 9:33 AM, Olivier Matz wrote:
>>> On Mon, Oct 05, 2020 at 01:23:08PM +0100, Ferruh Yigit wrote:
>>>> On 9/28/2020 4:43 PM, Stephen Hemminger wrote:
>>>>> On Mon, 28 Sep 2020 17:24:26 +0200
>>>>> Thomas Monjalon <thomas@monjalon.net> wrote:
>>>>>> 28/09/2020 15:53, Ferruh Yigit:
>>>>>>> On 9/28/2020 10:16 AM, Thomas Monjalon wrote:
>>>>>>>> 28/09/2020 10:59, Ferruh Yigit:
>>>>>>>>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
>>>>>>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>>>>>>
>>>>>>>>>> Currently, only statistics of rx/tx queues with queue_id less than
>>>>>>>>>> RTE_ETHDEV_QUEUE_STAT_CNTRS can be displayed. If an application
>>>>>>>>>> needs to use 256 or more queues and display the statistics of
>>>>>>>>>> every rx/tx queue, the macro currently has to be changed to equal
>>>>>>>>>> the queue number.
>>>>>>>>>>
>>>>>>>>>> However, modifying the macro to be greater than 256 triggers many
>>>>>>>>>> errors and warnings from test-pmd, the PMD drivers and
>>>>>>>>>> librte_ethdev while compiling the dpdk project. But it is possible
>>>>>>>>>> and permitted that the rx/tx queue number is greater than 256 and
>>>>>>>>>> that all rx/tx queue statistics need to be displayed. In addition,
>>>>>>>>>> the data type of the rx/tx queue number in the rte_eth_dev_configure
>>>>>>>>>> API is 'uint16_t'. So it is unreasonable to use the 'uint8_t' type
>>>>>>>>>> for the variables that control which per-queue statistics can be
>>>>>>>>>> displayed.
>>>>>>>>
>>>>>>>> The explanation is too complex and misleading.
>>>>>>>> You mean you cannot increase RTE_ETHDEV_QUEUE_STAT_CNTRS
>>>>>>>> above 256 because it is an 8-bit type?
>>>>>>>>
>>>>>>>> [...]
>>>>>>>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>>>>>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>>>>>>>>       int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
>>>>>>>>>> -		uint16_t tx_queue_id, uint8_t stat_idx);
>>>>>>>>>> +		uint16_t tx_queue_id, uint16_t stat_idx);
>>>>>>>> [...]
>>>>>>>>>>       int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
>>>>>>>>>>       					   uint16_t rx_queue_id,
>>>>>>>>>> -					   uint8_t stat_idx);
>>>>>>>>>> +					   uint16_t stat_idx);
>>>>>>>> [...]
>>>>>>>>> cc'ed tech-board,
>>>>>>>>>
>>>>>>>>> The patch breaks the ethdev ABI without a deprecation notice from previous
>>>>>>>>> release(s).
>>>>>>>>>
>>>>>>>>> It is mainly a fix to the port_id storage type, which we have updated from
>>>>>>>>> uint8_t to uint16_t in past but some seems remained for
>>>>>>>>> 'rte_eth_dev_set_tx_queue_stats_mapping()' &
>>>>>>>>> 'rte_eth_dev_set_rx_queue_stats_mapping()' APIs.
>>>>>>>>
>>>>>>>> No, it is not related to the port id, but the number of limited stats.
>>>>>>>
>>>>>>> Right, it is not related to the port id, it is fixing the storage type for index
>>>>>>> used to map the queue stats.
>>>>>>>>> Since the ethdev library already heavily breaks the ABI this release, I am for
>>>>>>>>> getting this fix, instead of waiting the fix for one more year.
>>>>>>>>
>>>>>>>> If stats can be managed for more than 256 queues, I think it means
>>>>>>>> it is not limited. In this case, we probably don't need the API
>>>>>>>> *_queue_stats_mapping which was invented for a limitation of ixgbe.
>>>>>>>>
>>>>>>>> The problem is probably somewhere else (in testpmd),
>>>>>>>> that's why I am against this patch.
>>>>>>>
>>>>>>> This patch is not to fix queue stats mapping, I agree there are problems related
>>>>>>> to it, already shared as comment to this set.
>>>>>>>
>>>>>>> But this patch is to fix the build errors when 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
>>>>>>> needs to set more than 255. Where the build errors seems around the
>>>>>>> stats_mapping APIs.
>>>>>>
>>>>>> It is not said this API is supposed to manage more than 256 queues mapping.
>>>>>> In general we should not need this API.
>>>>>> I think it is solving the wrong problem.
>>>>>
>>>>>
>>>>> The original API is a band-aid for the limited number of statistics counters
>>>>> in the Intel IXGBE hardware. It crept into DPDK as an API. I would rather
>>>>> have per-queue statistics and make ixgbe say "not supported"
>>>>>
>>>>
>>>> The current issue is not directly related to '*_queue_stats_mapping' APIs.
>>>>
>>>> Problem is not able to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255.
>>>> User may need to set the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, since it is
>>>> used to define size of the stats counter.
>>>> "uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];"
>>>>
>>>> When 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, it gives multiple build errors,
>>>> the one in the ethdev is like [1].
>>>>
>>>> This can be fixed two ways,
>>>> a) increase the size of 'stat_idx' storage type to u16 in the
>>>> '*_queue_stats_mapping' APIs, this is what this patch does.
>>>> b) Fix with a casting in the comparison, without changing the APIs.
>>>>
>>>> I think both are OK, but is (b) more preferable?
>>>
>>> I think the patch (a) is ok, knowing that RTE_ETHDEV_QUEUE_STAT_CNTRS is
>>> not modified.
>>>
>>> On the substance, I agree with Thomas that the queue_stats_mapping API
>>> should be replaced by xstats.
>>>
>>
>> This has been discussed in the last technical board meeting, the decision was to
>> use xstats to get queue related statistics [2].
>>
>> But on second look, even if xstats is used to get statistics,
>> 'RTE_ETHDEV_QUEUE_STAT_CNTRS' is still involved, since the xstats
>> implementation uses 'rte_eth_stats_get()' to get queue statistics.
>> So when a device has more than 255 queues, 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
>> still needs to be set > 255, which causes the build error.
> 
> You're right, when using the old API in xstats implementation,
> we are limited to RTE_ETHDEV_QUEUE_STAT_CNTRS queues.
> 
>> I have an AR to send a deprecation notice for the current method of getting
>> the queue statistics, and to limit the old method to 256 queues. But since
>> xstats is just a wrapper around the old method, I am not quite sure how
>> deprecating it will work.
>>
>> @Thomas, @Honnappa, can you give some more insight on the issue?
> 
> It becomes a PMD issue. The PMD implementation of xstats must complete
> the statistics for the queues above RTE_ETHDEV_QUEUE_STAT_CNTRS.
> 
> In order to prepare the removal of the old method smoothly,
> we could add a driver flag which indicates whether or not the PMD relies
> on pre-filling xstats from the old per-queue stats conversion.
> 

I have sent an RFC, can you please check:
https://patches.dpdk.org/patch/80390/


Connor,

Does this proposal make sense?
If you need stats for more than 256 queues, can you please implement xstats
for the queue stats?

You don't need to wait for the above RFC to be accepted; you can implement the
xstats now, but it will have some duplication. If the above RFC is accepted, you
can set the 'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS' flag to remove the duplication.

> 
>> [2]
>> https://mails.dpdk.org/archives/dev/2020-October/185299.html
> 
> 
> 
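
For reference, a sketch of reading per-queue counters through xstats, which is
the direction agreed above. The "_q" name filter is a crude illustration only;
xstats counter names (e.g. "rx_q3_packets") are driver-dependent.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print every xstat whose name looks queue-related. */
static void
dump_queue_xstats(uint16_t port_id)
{
	int i, n;
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *vals = NULL;

	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	vals = calloc(n, sizeof(*vals));
	if (names == NULL || vals == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
			rte_eth_xstats_get(port_id, vals, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		if (strstr(names[i].name, "_q") != NULL)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, vals[i].value);
out:
	free(names);
	free(vals);
}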


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 2/8] security: modify PDCP xform to support SDAP
  @ 2020-10-12 14:10  4%     ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-12 14:10 UTC (permalink / raw)
  To: dev, techboard
  Cc: thomas, anoobj, hemant.agrawal, declan.doherty, david.coyle, Akhil Goyal

SDAP is a protocol in the LTE stack, on top of PDCP, used for QoS.
A particular PDCP session may or may not have SDAP enabled. If it is
enabled, the SDAP header should be authenticated but not encrypted
when both confidentiality and integrity are enabled. Hence, the driver
should be informed through the xform so that it skips the SDAP header
during encryption.

A new field is added in the PDCP xform to specify whether SDAP is
enabled. The overall size of the xform is not changed, as hfn_ovrd is
just a flag and does not need uint32_t. Hence, it is converted to
uint8_t and a 16-bit reserved field is added for future use.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 lib/librte_security/rte_security.h     | 12 ++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c34ab5493..fad91487a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -103,6 +103,11 @@ New Features
   also known as Mount Bryce.  See the
   :doc:`../bbdevs/acc100` BBDEV guide for more details on this new driver.
 
+* **Updated rte_security library to support SDAP.**
+
+  ``rte_security_pdcp_xform`` in ``rte_security`` lib is updated to enable
+  5G NR processing of SDAP header in PMDs.
+
 * **Updated Virtio driver.**
 
   * Added support for Vhost-vDPA backend to Virtio-user PMD.
@@ -307,6 +312,10 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* security: ``hfn_ovrd`` field in ``rte_security_pdcp_xform`` is changed from
+  ``uint32_t`` to ``uint8_t`` so that a new field ``sdap_enabled`` can be added
+  to support SDAP.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 16839e539..c259b35e0 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2020 NXP
  * Copyright(c) 2017-2020 Intel Corporation.
  */
 
@@ -290,7 +290,15 @@ struct rte_security_pdcp_xform {
 	 * per packet HFN in place of IV. PMDs will extract the HFN
 	 * and perform operations accordingly.
 	 */
-	uint32_t hfn_ovrd;
+	uint8_t hfn_ovrd;
+	/** In case of 5G NR, a new protocol (SDAP) header may be present
+	 * inside the PDCP payload, which should be authenticated but not
+	 * encrypted. Hence, the driver should be notified whether SDAP is
+	 * enabled or not, so that the SDAP header is not encrypted.
+	 */
+	uint8_t sdap_enabled;
+	/** Reserved for future */
+	uint16_t reserved;
 };
 
 /** DOCSIS direction */
-- 
2.17.1
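
To make the new field concrete, a sketch of where sdap_enabled sits in a PDCP
session configuration follows. Only hfn_ovrd and sdap_enabled come from the
hunk above; the remaining values are illustrative assumptions, not part of the
patch.

#include <rte_security.h>

/* Fill a lookaside-protocol PDCP session conf; values other than
 * hfn_ovrd and sdap_enabled are illustrative.
 */
static void
fill_pdcp_conf(struct rte_security_session_conf *conf,
	       struct rte_crypto_sym_xform *crypto_xform)
{
	conf->action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
	conf->protocol = RTE_SECURITY_PROTOCOL_PDCP;
	conf->crypto_xform = crypto_xform;

	conf->pdcp.domain = RTE_SECURITY_PDCP_MODE_DATA;
	conf->pdcp.pkt_dir = RTE_SECURITY_PDCP_UPLINK;
	conf->pdcp.hfn_ovrd = 0;	/* per-packet HFN override off; now uint8_t */
	conf->pdcp.sdap_enabled = 1;	/* SDAP header authenticated, not encrypted */
}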


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
                       ` (2 preceding siblings ...)
  2020-10-12 13:03 15%     ` [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
@ 2020-10-12 13:03 18%     ` Conor Walsh
  2020-10-14  9:46  0%       ` Kinsella, Ray
  2020-10-14  9:37  4%     ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
  2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
  5 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-12 13:03 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Update the "Checking Compilation" and "Checking ABI compatibility"
sections of the patches part of the contribution guide.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>

---
 doc/guides/contributing/patches.rst | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c..e11d63bb0 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
 combinations of compilation configuration.
 By default, each build will be put in a subfolder of the current working directory.
 However, if it is preferred to place the builds in a different location,
-the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
-For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
-in a single subfolder called "__builds" created in the current directory.
-Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
+the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
+can be set to that desired location.
+Environment variables can also be specified in ``.config/dpdk/devel.config``.
 
 
 .. _integrated_abi_check:
@@ -483,14 +482,17 @@ Checking ABI compatibility
 
 By default, ABI compatibility checks are disabled.
 
-To enable them, a reference version must be selected via the environment
-variable ``DPDK_ABI_REF_VERSION``.
-
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
-results in a subfolder of the current working directory.
-The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
-to a different location.
+To enable ABI checks the required reference version must be set using either the
+environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
+The tag ``latest`` is supported, which will select the latest quarterly release.
+e.g. ``./devtools/test-meson-builds.sh -a latest``.
+
+The ``devtools/test-meson-builds.sh`` script will then either build this reference version
+or download a cached version when available in a temporary directory and store the results
+in a subfolder of the current working directory.
+The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
+the results go to a different location.
+Environment variables can also be specified in ``.config/dpdk/devel.config``.
 
 
 Sending Patches
-- 
2.25.1


^ permalink raw reply	[relevance 18%]

* [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
  2020-10-12 13:03 21%     ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
  2020-10-12 13:03 25%     ` [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-12 13:03 15%     ` Conor Walsh
  2020-10-14  9:44  4%       ` Kinsella, Ray
  2020-10-12 13:03 18%     ` [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
                       ` (2 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-12 13:03 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Change the "dump file not found" condition from an error to a warning, so that
check-abi.sh stays compatible with the changes to test-meson-builds.sh that are
needed to use prebuilt references.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>

---
 devtools/check-abi.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfb..60d88777e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
 	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
-		echo "Error: can't find $name in $newdir"
-		error=1
+		echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
 		continue
 	fi
 	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
-- 
2.25.1


^ permalink raw reply	[relevance 15%]

* [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
  2020-10-12 13:03 21%     ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-12 13:03 25%     ` Conor Walsh
  2020-10-14  9:43  4%       ` Kinsella, Ray
  2020-10-12 13:03 15%     ` [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
                       ` (3 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-12 13:03 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patch adds new features to test-meson-builds.sh that make the
script easier to use; it also includes changes that speed up the abi
breakage checks.
Changes/Additions:
 - Command line arguments added; the changes are fully backwards
   compatible and all previous environment variables are still supported
 - All paths supplied by the user are converted to absolute paths if they
   are relative, as meson has a bug that can sometimes error when a
   relative path is supplied to it.
 - abi check/generation code moved to function to improve readability
 - Only 2 abi checks will now be completed:
    - 1 x86_64 gcc or clang check
    - 1 ARM gcc or clang check
   It is not necessary to check abi breakages in every build
 - abi checks can now make use of prebuilt abi references from an http
   or local source; it is hoped these will be hosted on dpdk.org in
   the future.
Invoke using "./test-meson-builds.sh [-b <build directory>]
   [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
   [-d <directory for abi references>]"
 - <build directory>: directory to store builds (relative or absolute)
 - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
 - <uri for abi references>: http location or directory to get prebuilt
   abi references from
 - <directory for abi references>: directory to store abi references
   (relative or absolute)
e.g. "./test-meson-builds.sh -a latest"
If no flags are specified, test-meson-builds.sh will run the standard
meson tests with default options, unless environmental variables are
specified.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>

---
 devtools/test-meson-builds.sh | 170 +++++++++++++++++++++++++++-------
 1 file changed, 138 insertions(+), 32 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index a87de635a..b45506fb0 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -1,12 +1,73 @@
 #! /bin/sh -e
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018-2020 Intel Corporation
 
 # Run meson to auto-configure the various builds.
 # * all builds get put in a directory whose name starts with "build-"
 # * if a build-directory already exists we assume it was properly configured
 # Run ninja after configuration is done.
 
+# Get arguments
+usage()
+{
+	echo "Usage: $0
+	      [-b <build directory>]
+	      [-a <dpdk tag or latest for abi check>]
+	      [-u <uri for abi references>]
+	      [-d <directory for abi references>]" 1>&2; exit 1;
+}
+
+DPDK_ABI_DEFAULT_URI="http://dpdk.org/abi-refs"
+
+while getopts "a:u:d:b:h" arg; do
+	case $arg in
+	a)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	u)
+		if [ -n "$DPDK_ABI_TAR_URI" ]; then
+			echo "DPDK_ABI_TAR_URI and -u cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_TAR_URI=${OPTARG} ;;
+	d)
+		if [ -n "$DPDK_ABI_REF_DIR" ]; then
+			echo "DPDK_ABI_REF_DIR and -d cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_DIR=${OPTARG} ;;
+	b)
+		if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
+			echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_BUILD_TEST_DIR=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
+	if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
+		DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
+	        	sed "s/.*\///" | grep -v "r\|{}" |
+			grep '^[^.]*.[^.]*$' | tail -n 1)
+	elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
+		echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
+		exit 1
+	fi
+fi
+if [ -z $DPDK_ABI_TAR_URI ] ; then
+	DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
+fi
+# allow the generation script to override value with env var
+abi_checks_done=${DPDK_ABI_GEN_REF:-0}
+
 # set pipefail option if possible
 PIPEFAIL=""
 set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
@@ -16,7 +77,11 @@ srcdir=$(dirname $(readlink -f $0))/..
 
 MESON=${MESON:-meson}
 use_shared="--default-library=shared"
-builds_dir=${DPDK_BUILD_TEST_DIR:-.}
+builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
+# ensure path is absolute meson returns error when some paths are relative
+if echo "$builds_dir" | grep -qv '^/'; then
+        builds_dir=$srcdir/$builds_dir
+fi
 
 if command -v gmake >/dev/null 2>&1 ; then
 	MAKE=gmake
@@ -123,39 +188,49 @@ install_target () # <builddir> <installdir>
 	fi
 }
 
-build () # <directory> <target compiler | cross file> <meson options>
+abi_gen_check () # no options
 {
-	targetdir=$1
-	shift
-	crossfile=
-	[ -r $1 ] && crossfile=$1 || targetcc=$1
-	shift
-	# skip build if compiler not available
-	command -v ${CC##* } >/dev/null 2>&1 || return 0
-	if [ -n "$crossfile" ] ; then
-		cross="--cross-file $crossfile"
-		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
-			$crossfile | tr -d "'" | tr -d '"')
-	else
-		cross=
+	abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
+	mkdir -p $abirefdir
+	# ensure path is absolute meson returns error when some are relative
+	if echo "$abirefdir" | grep -qv '^/'; then
+		abirefdir=$srcdir/$abirefdir
 	fi
-	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $cross --werror $*
-	compile $builds_dir/$targetdir
-	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
-		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
-		if [ ! -d $abirefdir/$targetdir ]; then
+	if [ ! -d $abirefdir/$targetdir ]; then
+
+		# try to get abi reference
+		if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
+			if [ $abi_checks_done -gt -1 ]; then
+				if curl --head --fail --silent \
+					"$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
+					>/dev/null; then
+					curl -o $abirefdir/$targetdir.tar.gz \
+					$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
+				fi
+			fi
+		elif [ $abi_checks_done -gt -1 ]; then
+			if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
+				cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
+					$abirefdir/
+			fi
+		fi
+		if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
+			tar -xf $abirefdir/$targetdir.tar.gz \
+				-C $abirefdir >/dev/null
+			rm -rf $abirefdir/$targetdir.tar.gz
+		# if no reference can be found then generate one
+		else
 			# clone current sources
 			if [ ! -d $abirefdir/src ]; then
 				git clone --local --no-hardlinks \
-					--single-branch \
-					-b $DPDK_ABI_REF_VERSION \
-					$srcdir $abirefdir/src
+					  --single-branch \
+					  -b $DPDK_ABI_REF_VERSION \
+					  $srcdir $abirefdir/src
 			fi
 
 			rm -rf $abirefdir/build
 			config $abirefdir/src $abirefdir/build $cross \
-				-Dexamples= $*
+			       -Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
 			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
@@ -164,17 +239,46 @@ build () # <directory> <target compiler | cross file> <meson options>
 			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
 			rm -rf $abirefdir/$targetdir/usr/local/bin
 			rm -rf $abirefdir/$targetdir/usr/local/share
+			rm -rf $abirefdir/$targetdir/usr/local/lib
 		fi
+	fi
 
-		install_target $builds_dir/$targetdir \
-			$(readlink -f $builds_dir/$targetdir/install)
-		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install)
+	install_target $builds_dir/$targetdir \
+		$(readlink -f $builds_dir/$targetdir/install)
+	$srcdir/devtools/gen-abi.sh \
+		$(readlink -f $builds_dir/$targetdir/install)
+	# check abi if not generating references
+	if [ -z $DPDK_ABI_GEN_REF ] ; then
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
 	fi
 }
 
+build () # <directory> <target compiler | cross file> <meson options>
+{
+	targetdir=$1
+	shift
+	crossfile=
+	[ -r $1 ] && crossfile=$1 || targetcc=$1
+	shift
+	# skip build if compiler not available
+	command -v ${CC##* } >/dev/null 2>&1 || return 0
+	if [ -n "$crossfile" ] ; then
+		cross="--cross-file $crossfile"
+		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
+			$crossfile | tr -d "'" | tr -d '"')
+	else
+		cross=
+	fi
+	load_env $targetcc || return 0
+	config $srcdir $builds_dir/$targetdir $cross --werror $*
+	compile $builds_dir/$targetdir
+	if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
+		abi_gen_check
+		abi_checks_done=$((abi_checks_done+1))
+	fi
+}
+
 if [ "$1" = "-vv" ] ; then
 	TEST_MESON_BUILD_VERY_VERBOSE=1
 elif [ "$1" = "-v" ] ; then
@@ -189,7 +293,7 @@ fi
 # shared and static linked builds with gcc and clang
 for c in gcc clang ; do
 	command -v $c >/dev/null 2>&1 || continue
-	for s in static shared ; do
+	for s in shared static ; do
 		export CC="$CCACHE $c"
 		build build-$c-$s $c --default-library=$s
 		unset CC
@@ -211,6 +315,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
 
 # generic armv8a with clang as host compiler
 f=$srcdir/config/arm/arm64_armv8_linux_gcc
+# run abi checks with 1 arm build
+abi_checks_done=$((abi_checks_done-1))
 export CC="clang"
 build build-arm64-host-clang $f $use_shared
 unset CC
@@ -231,7 +337,7 @@ done
 build_path=$(readlink -f $builds_dir/build-x86-default)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
-if [ -z "$DPDK_ABI_REF_VERSION" ]; then
+if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
 	install_target $build_path $DESTDIR
 fi
 
-- 
2.25.1


^ permalink raw reply	[relevance 25%]

* [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
@ 2020-10-12 13:03 21%     ` Conor Walsh
  2020-10-14  9:38  4%       ` Kinsella, Ray
  2020-10-12 13:03 25%     ` [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
                       ` (4 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-12 13:03 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patch adds a script that generates compressed archives
containing .dump files, which can be used to perform abi
breakage checking in test-meson-builds.sh.
Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
 - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
e.g. "./gen-abi-tarballs.sh -v latest"
If no tag is specified, the script will default to "latest"
Using these parameters, the script will produce several *.tar.gz
archives containing the .dump files required to do abi breakage checking.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>

---
 devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100755 devtools/gen-abi-tarballs.sh

diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
new file mode 100755
index 000000000..bcc1beac5
--- /dev/null
+++ b/devtools/gen-abi-tarballs.sh
@@ -0,0 +1,48 @@
+#! /bin/sh -e
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+# Generate the required prebuilt ABI references for test-meson-build.sh
+
+# Get arguments
+usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
+abi_tag=
+while getopts "v:h" arg; do
+	case $arg in
+	v)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -z $DPDK_ABI_REF_VERSION ] ; then
+	DPDK_ABI_REF_VERSION="latest"
+fi
+
+srcdir=$(dirname $(readlink -f $0))/..
+
+DPDK_ABI_GEN_REF=-20
+DPDK_ABI_REF_DIR=$srcdir/__abitarballs
+
+. $srcdir/devtools/test-meson-builds.sh
+
+abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
+
+rm -rf $abirefdir/build-*.tar.gz
+cd $abirefdir
+for f in build-* ; do
+	tar -czf $f.tar.gz $f
+done
+cp *.tar.gz ../
+rm -rf *
+mv ../*.tar.gz .
+rm -rf build-x86-default.tar.gz
+
+echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
-- 
2.25.1


^ permalink raw reply	[relevance 21%]

* [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks
  2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
                     ` (3 preceding siblings ...)
  2020-10-12  8:08 20%   ` [dpdk-dev] [PATCH v5 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
@ 2020-10-12 13:03  9%   ` Conor Walsh
  2020-10-12 13:03 21%     ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
                       ` (5 more replies)
  4 siblings, 6 replies; 200+ results
From: Conor Walsh @ 2020-10-12 13:03 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patchset will help developers discover abi breakages more easily
before upstreaming their code. Checking that the DPDK ABI has not
changed before upstreaming code is currently not intuitive and the
process is time consuming. Contributors must use the
test-meson-builds.sh tool, alongside some environmental variables, to
test their changes. Contributors in many cases are either unaware or
unable to do this themselves, leading to a potentially serious situation
where they are unknowingly upstreaming code that breaks the ABI. These
breakages are caught by Travis, but it would be more efficient if they
were caught locally before upstreaming. This patchset introduces changes
to test-meson-builds.sh and check-abi.sh, and adds a new script,
gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX
changes such as adding command line arguments and allowing the use of
relative paths. The number of abi checks is reduced to just two, one for
x86_64 and one for ARM; the references for these tests can now be prebuilt
and downloaded by test-meson-builds.sh, allowing the tests to run much
faster. check-abi.sh is updated to use the prebuilt references.
gen-abi-tarballs.sh is a new script to generate the prebuilt abi
references used by test-meson-builds.sh; these compressed archives can be
retrieved from either a local directory or a remote http location.

---
v6: Corrected a mistake in the doc patch

v5:
 - Patchset has been completely reworked following feedback
 - Patchset is now part of test-meson-builds.sh not the meson build system

v4:
 - Reworked both Python scripts to use more native Python functions
   and modules.
 - Python scripts are now in line with how other Python scripts in
   DPDK are structured.

v3:
 - Fix for bug which now allows meson < 0.48.0 to be used
 - Various coding style changes throughout
 - Minor bug fixes to the various meson.build files

v2: Spelling mistake, corrected spelling of environmental

Conor Walsh (4):
  devtools: add generation of compressed abi dump archives
  devtools: abi and UX changes for test-meson-builds.sh
  devtools: change dump file not found to warning in check-abi.sh
  doc: test-meson-builds.sh doc updates

 devtools/check-abi.sh               |   3 +-
 devtools/gen-abi-tarballs.sh        |  48 ++++++++
 devtools/test-meson-builds.sh       | 170 ++++++++++++++++++++++------
 doc/guides/contributing/patches.rst |  26 +++--
 4 files changed, 201 insertions(+), 46 deletions(-)
 create mode 100755 devtools/gen-abi-tarballs.sh

-- 
2.25.1


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: remove crypto list end enumerators
  2020-10-12  5:15  0%     ` Kusztal, ArkadiuszX
@ 2020-10-12 11:46  0%       ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-12 11:46 UTC (permalink / raw)
  To: Kusztal, ArkadiuszX, dev; +Cc: Trahe, Fiona, ruifeng.wang, michaelsh

Hi Arek,
> Hi Akhil,
> 
> > -----Original Message-----
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> > Sent: czwartek, 8 października 2020 21:58
> > To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> > Cc: Trahe, Fiona <fiona.trahe@intel.com>; ruifeng.wang@arm.com;
> > michaelsh@marvell.com
> > Subject: RE: [PATCH v2 3/5] cryptodev: remove crypto list end enumerators
> >
> > > diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> > > b/lib/librte_cryptodev/rte_crypto_sym.h
> > > index f29c98051..7a2556a9e 100644
> > > --- a/lib/librte_cryptodev/rte_crypto_sym.h
> > > +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> > > @@ -132,15 +132,12 @@ enum rte_crypto_cipher_algorithm {
> > >  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
> > >  	 */
> > >
> > > -	RTE_CRYPTO_CIPHER_DES_DOCSISBPI,
> > > +	RTE_CRYPTO_CIPHER_DES_DOCSISBPI
> > >  	/**< DES algorithm using modes required by
> > >  	 * DOCSIS Baseline Privacy Plus Spec.
> > >  	 * Chained mbufs are not supported in this mode, i.e. rte_mbuf.next
> > >  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
> > >  	 */
> > > -
> > > -	RTE_CRYPTO_CIPHER_LIST_END
> > > -
> > >  };
> >
> > Probably we should add a comment for each of the enums that we change,
> > that the user can define its own LIST_END = last item in the enum +1.
> > LIST_END is not added to avoid ABI breakage across releases when new algos
> > are added.
> [Arek] - I do not know if that is necessary; should there be some kind of
> guarantee that the order and number of enumerators will not change across
> the releases?

Yes, we should make sure that the order of the enums is not changed in the
future. This can be mentioned in the comments. We will be adding new algos
at the end only. Please add a comment.
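
For illustration only (a minimal sketch, not part of the proposed API
change), an application that still needs a list-end value could keep its
own sentinel derived from the last enumerator of the release it builds
against:

#include <rte_crypto_sym.h>

/* Application-defined list end: last cipher enumerator + 1.
 * Must be kept in sync with the DPDK headers the application is
 * built against; the macro name is purely illustrative.
 */
#define APP_CRYPTO_CIPHER_LIST_END \
	(RTE_CRYPTO_CIPHER_DES_DOCSISBPI + 1)

static int
app_cipher_algo_is_known(enum rte_crypto_cipher_algorithm algo)
{
	return algo < APP_CRYPTO_CIPHER_LIST_END;
}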

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support
  2020-10-12 10:42  3%       ` [dpdk-dev] [PATCH v5 " Dekel Peled
@ 2020-10-12 10:43  8%         ` Dekel Peled
  2020-10-12 19:29  0%           ` Thomas Monjalon
  2020-10-13 13:32  3%         ` [dpdk-dev] [PATCH v6 0/5] support match on L3 fragmented packets Dekel Peled
  1 sibling, 1 reply; 200+ results
From: Dekel Peled @ 2020-10-12 10:43 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch updates the 20.11 release notes with the changes included in
the patches of this series:
1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
   packets.
2) ABI change in ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 35dd938..9894ad6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -148,6 +148,11 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
@@ -300,6 +305,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values added to struct, indicating the existence of
+    every defined extension header type.
+    Applications should use the new values for identification of existing
+    extensions in the packet header.
 
 Known Issues
 ------------
-- 
1.8.3.1


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v5 00/11] support match on L3 fragmented packets
  2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
  2020-10-07 10:54  8%       ` [dpdk-dev] [PATCH v4 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
  2020-10-07 11:15  0%       ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Ori Kam
@ 2020-10-12 10:42  3%       ` Dekel Peled
  2020-10-12 10:43  8%         ` [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
  2020-10-13 13:32  3%         ` [dpdk-dev] [PATCH v6 0/5] support match on L3 fragmented packets Dekel Peled
  2 siblings, 2 replies; 200+ results
From: Dekel Peled @ 2020-10-12 10:42 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This series implements support for matching on packets based on the
fragmentation attribute of the packet, i.e. whether the packet is a
fragment of a larger packet or not.

In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
In MLX5 PMD, support match on IPv4 fragmented packets, IPv6 fragmented
packets, and IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.
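
As a rough illustration of the new ethdev capability (a sketch only,
relying on the has_frag_ext attribute this series adds to struct
rte_flow_item_ipv6), an application could match and drop ingress IPv6
fragments like this:

#include <rte_flow.h>

/* Drop ingress IPv6 packets that carry a fragment extension header.
 * Minimal sketch: no error handling beyond the return value.
 */
static struct rte_flow *
drop_ipv6_fragments(uint16_t port_id, struct rte_flow_error *err)
{
	static const struct rte_flow_item_ipv6 spec = { .has_frag_ext = 1 };
	static const struct rte_flow_item_ipv6 mask = { .has_frag_ext = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_attr attr = { .ingress = 1 };

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}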

---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
v5: update following rebase on recent ICMP changes.
---

Dekel Peled (11):
  ethdev: add extensions attributes to IPv6 item
  ethdev: add IPv6 fragment extension header item
  app/testpmd: support IPv4 fragments
  app/testpmd: support IPv6 fragments
  app/testpmd: support IPv6 fragment extension item
  net/mlx5: remove handling of ICMP fragmented packets
  net/mlx5: support match on IPv4 fragment packets
  net/mlx5: support match on IPv6 fragment packets
  net/mlx5: support match on IPv6 fragment ext. item
  doc: update release notes for MLX5 L3 frag support
  net/mlx5: enforce limitation on IPv6 next proto

 app/test-pmd/cmdline_flow.c            |  53 +++++
 doc/guides/nics/mlx5.rst               |   7 +
 doc/guides/prog_guide/rte_flow.rst     |  34 ++-
 doc/guides/rel_notes/release_20_11.rst |  10 +
 drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
 drivers/net/mlx5/mlx5_flow.h           |  14 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 382 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 lib/librte_ethdev/rte_flow.c           |   1 +
 lib/librte_ethdev/rte_flow.h           |  45 +++-
 lib/librte_ip_frag/rte_ip_frag.h       |  26 +--
 lib/librte_net/rte_ip.h                |  26 ++-
 12 files changed, 579 insertions(+), 90 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 3/4] devtools: change dump file not found to warning in check-abi.sh
  2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
  2020-10-12  8:08 21%   ` [dpdk-dev] [PATCH v5 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
  2020-10-12  8:08 25%   ` [dpdk-dev] [PATCH v5 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-12  8:08 15%   ` Conor Walsh
  2020-10-12  8:08 20%   ` [dpdk-dev] [PATCH v5 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
  4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-12  8:08 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Change the "dump file not found" condition from an error to a warning, to
make check-abi.sh compatible with the changes to test-meson-builds.sh
needed to use prebuilt references.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 devtools/check-abi.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfb..60d88777e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
 	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
-		echo "Error: can't find $name in $newdir"
-		error=1
+		echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
 		continue
 	fi
 	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
-- 
2.25.1


^ permalink raw reply	[relevance 15%]

* [dpdk-dev] [PATCH v5 4/4] doc: test-meson-builds.sh doc updates
  2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
                     ` (2 preceding siblings ...)
  2020-10-12  8:08 15%   ` [dpdk-dev] [PATCH v5 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
@ 2020-10-12  8:08 20%   ` Conor Walsh
  2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
  4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-12  8:08 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

Updates to the "Checking Compilation" and "Checking ABI compatibility"
sections of the patch contribution guide (patches.rst).

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 doc/guides/contributing/patches.rst | 43 ++++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c..d45bb5ce1 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
 combinations of compilation configuration.
 By default, each build will be put in a subfolder of the current working directory.
 However, if it is preferred to place the builds in a different location,
-the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
-For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
-in a single subfolder called "__builds" created in the current directory.
-Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
+the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
+can be set to that desired location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
 
 
 .. _integrated_abi_check:
@@ -483,14 +482,38 @@ Checking ABI compatibility
 
 By default, ABI compatibility checks are disabled.
 
-To enable them, a reference version must be selected via the environment
-variable ``DPDK_ABI_REF_VERSION``.
+To enable ABI checks the required reference version must be set using either the
+environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
+The tag ``latest`` is supported, which will select the latest quarterly release.
+e.g. ``./devtools/test-meson-builds.sh -a latest``.
 
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
+The ``devtools/test-meson-builds.sh`` script will then either build this reference version
+or download a cached version when available in a temporary directory and store the results
+in a subfolder of the current working directory.
+The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
+the results go to a different location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
+
+
+.. _integrated_abi_check:
+
+Checking ABI compatibility
+--------------------------
+
+By default, ABI compatibility checks are disabled.
+
+To enable ABI checks the required reference version must be set using either
+the environment variable ``DPDK_ABI_REF_VERSION`` or the argument ``-a``.
+The tag ``latest`` is supported, which will select the latest quarterly release.
+e.g. ``./devtools/test-meson-builds.sh -a latest``.
+
+The ``devtools/test-meson-builds.sh`` script will either build this reference version
+or download a cached version if available in a temporary directory and store the
 results in a subfolder of the current working directory.
-The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
-to a different location.
+The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
+the results go to a different location.
+The environment variable ``DPDK_ABI_TAR_URI`` or the argument ``-u`` can be set to select
+either a remote http location or local directory to download prebuilt ABI references from.
 
 
 Sending Patches
-- 
2.25.1


^ permalink raw reply	[relevance 20%]

* [dpdk-dev] [PATCH v5 2/4] devtools: abi and UX changes for test-meson-builds.sh
  2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
  2020-10-12  8:08 21%   ` [dpdk-dev] [PATCH v5 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-12  8:08 25%   ` Conor Walsh
  2020-10-12  8:08 15%   ` [dpdk-dev] [PATCH v5 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-12  8:08 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patch adds new features to test-meson-builds.sh that help to make
the process of using the script easier. The patch also includes
changes to make the abi breakage checks more performant.
Changes/Additions:
 - Command line arguments added; the changes are fully backwards
   compatible and all previous environmental variables are still supported.
 - All paths supplied by the user are converted to absolute paths if they
   are relative, as meson has a bug that can sometimes error if a
   relative path is supplied to it.
 - abi check/generation code moved to a function to improve readability.
 - Only 2 abi checks will now be completed:
    - 1 x86_64 gcc or clang check
    - 1 ARM gcc or clang check
   It is not necessary to check abi breakages in every build.
 - abi checks can now make use of prebuilt abi references from an http
   or local source; it is hoped these will be hosted on dpdk.org in
   the future.
Invoke using "./test-meson-builds.sh [-b <build directory>]
   [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
   [-d <directory for abi references>]"
 - <build directory>: directory to store builds (relative or absolute)
 - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
 - <uri for abi references>: http location or directory to get prebuilt
   abi references from
 - <directory for abi references>: directory to store abi references
   (relative or absolute)
e.g. "./test-meson-builds.sh -a latest"
If no flags are specified, test-meson-builds.sh will run the standard
meson tests with default options, unless environmental variables are
specified.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 devtools/test-meson-builds.sh | 170 +++++++++++++++++++++++++++-------
 1 file changed, 138 insertions(+), 32 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index a87de635a..b45506fb0 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -1,12 +1,73 @@
 #! /bin/sh -e
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018-2020 Intel Corporation
 
 # Run meson to auto-configure the various builds.
 # * all builds get put in a directory whose name starts with "build-"
 # * if a build-directory already exists we assume it was properly configured
 # Run ninja after configuration is done.
 
+# Get arguments
+usage()
+{
+	echo "Usage: $0
+	      [-b <build directory>]
+	      [-a <dpdk tag or latest for abi check>]
+	      [-u <uri for abi references>]
+	      [-d <directory for abi references>]" 1>&2; exit 1;
+}
+
+DPDK_ABI_DEFAULT_URI="http://dpdk.org/abi-refs"
+
+while getopts "a:u:d:b:h" arg; do
+	case $arg in
+	a)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	u)
+		if [ -n "$DPDK_ABI_TAR_URI" ]; then
+			echo "DPDK_ABI_TAR_URI and -u cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_TAR_URI=${OPTARG} ;;
+	d)
+		if [ -n "$DPDK_ABI_REF_DIR" ]; then
+			echo "DPDK_ABI_REF_DIR and -d cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_DIR=${OPTARG} ;;
+	b)
+		if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
+			echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
+			exit 1
+		fi
+		DPDK_BUILD_TEST_DIR=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
+	if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
+		DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
+	        	sed "s/.*\///" | grep -v "r\|{}" |
+			grep '^[^.]*.[^.]*$' | tail -n 1)
+	elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
+		echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
+		exit 1
+	fi
+fi
+if [ -z $DPDK_ABI_TAR_URI ] ; then
+	DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
+fi
+# allow the generation script to override value with env var
+abi_checks_done=${DPDK_ABI_GEN_REF:-0}
+
 # set pipefail option if possible
 PIPEFAIL=""
 set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
@@ -16,7 +77,11 @@ srcdir=$(dirname $(readlink -f $0))/..
 
 MESON=${MESON:-meson}
 use_shared="--default-library=shared"
-builds_dir=${DPDK_BUILD_TEST_DIR:-.}
+builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
+# ensure path is absolute meson returns error when some paths are relative
+if echo "$builds_dir" | grep -qv '^/'; then
+        builds_dir=$srcdir/$builds_dir
+fi
 
 if command -v gmake >/dev/null 2>&1 ; then
 	MAKE=gmake
@@ -123,39 +188,49 @@ install_target () # <builddir> <installdir>
 	fi
 }
 
-build () # <directory> <target compiler | cross file> <meson options>
+abi_gen_check () # no options
 {
-	targetdir=$1
-	shift
-	crossfile=
-	[ -r $1 ] && crossfile=$1 || targetcc=$1
-	shift
-	# skip build if compiler not available
-	command -v ${CC##* } >/dev/null 2>&1 || return 0
-	if [ -n "$crossfile" ] ; then
-		cross="--cross-file $crossfile"
-		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
-			$crossfile | tr -d "'" | tr -d '"')
-	else
-		cross=
+	abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
+	mkdir -p $abirefdir
+	# ensure path is absolute meson returns error when some are relative
+	if echo "$abirefdir" | grep -qv '^/'; then
+		abirefdir=$srcdir/$abirefdir
 	fi
-	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $cross --werror $*
-	compile $builds_dir/$targetdir
-	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
-		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
-		if [ ! -d $abirefdir/$targetdir ]; then
+	if [ ! -d $abirefdir/$targetdir ]; then
+
+		# try to get abi reference
+		if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
+			if [ $abi_checks_done -gt -1 ]; then
+				if curl --head --fail --silent \
+					"$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
+					>/dev/null; then
+					curl -o $abirefdir/$targetdir.tar.gz \
+					$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
+				fi
+			fi
+		elif [ $abi_checks_done -gt -1 ]; then
+			if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
+				cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
+					$abirefdir/
+			fi
+		fi
+		if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
+			tar -xf $abirefdir/$targetdir.tar.gz \
+				-C $abirefdir >/dev/null
+			rm -rf $abirefdir/$targetdir.tar.gz
+		# if no reference can be found then generate one
+		else
 			# clone current sources
 			if [ ! -d $abirefdir/src ]; then
 				git clone --local --no-hardlinks \
-					--single-branch \
-					-b $DPDK_ABI_REF_VERSION \
-					$srcdir $abirefdir/src
+					  --single-branch \
+					  -b $DPDK_ABI_REF_VERSION \
+					  $srcdir $abirefdir/src
 			fi
 
 			rm -rf $abirefdir/build
 			config $abirefdir/src $abirefdir/build $cross \
-				-Dexamples= $*
+			       -Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
 			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
@@ -164,17 +239,46 @@ build () # <directory> <target compiler | cross file> <meson options>
 			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
 			rm -rf $abirefdir/$targetdir/usr/local/bin
 			rm -rf $abirefdir/$targetdir/usr/local/share
+			rm -rf $abirefdir/$targetdir/usr/local/lib
 		fi
+	fi
 
-		install_target $builds_dir/$targetdir \
-			$(readlink -f $builds_dir/$targetdir/install)
-		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install)
+	install_target $builds_dir/$targetdir \
+		$(readlink -f $builds_dir/$targetdir/install)
+	$srcdir/devtools/gen-abi.sh \
+		$(readlink -f $builds_dir/$targetdir/install)
+	# check abi if not generating references
+	if [ -z $DPDK_ABI_GEN_REF ] ; then
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
 	fi
 }
 
+build () # <directory> <target compiler | cross file> <meson options>
+{
+	targetdir=$1
+	shift
+	crossfile=
+	[ -r $1 ] && crossfile=$1 || targetcc=$1
+	shift
+	# skip build if compiler not available
+	command -v ${CC##* } >/dev/null 2>&1 || return 0
+	if [ -n "$crossfile" ] ; then
+		cross="--cross-file $crossfile"
+		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
+			$crossfile | tr -d "'" | tr -d '"')
+	else
+		cross=
+	fi
+	load_env $targetcc || return 0
+	config $srcdir $builds_dir/$targetdir $cross --werror $*
+	compile $builds_dir/$targetdir
+	if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
+		abi_gen_check
+		abi_checks_done=$((abi_checks_done+1))
+	fi
+}
+
 if [ "$1" = "-vv" ] ; then
 	TEST_MESON_BUILD_VERY_VERBOSE=1
 elif [ "$1" = "-v" ] ; then
@@ -189,7 +293,7 @@ fi
 # shared and static linked builds with gcc and clang
 for c in gcc clang ; do
 	command -v $c >/dev/null 2>&1 || continue
-	for s in static shared ; do
+	for s in shared static ; do
 		export CC="$CCACHE $c"
 		build build-$c-$s $c --default-library=$s
 		unset CC
@@ -211,6 +315,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
 
 # generic armv8a with clang as host compiler
 f=$srcdir/config/arm/arm64_armv8_linux_gcc
+# run abi checks with 1 arm build
+abi_checks_done=$((abi_checks_done-1))
 export CC="clang"
 build build-arm64-host-clang $f $use_shared
 unset CC
@@ -231,7 +337,7 @@ done
 build_path=$(readlink -f $builds_dir/build-x86-default)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
-if [ -z "$DPDK_ABI_REF_VERSION" ]; then
+if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
 	install_target $build_path $DESTDIR
 fi
 
-- 
2.25.1


^ permalink raw reply	[relevance 25%]

* [dpdk-dev] [PATCH v5 1/4] devtools: add generation of compressed abi dump archives
  2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
@ 2020-10-12  8:08 21%   ` Conor Walsh
  2020-10-12  8:08 25%   ` [dpdk-dev] [PATCH v5 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-12  8:08 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patch adds a script that generates compressed archives
containing .dump files, which can be used to perform abi
breakage checking in test-meson-builds.sh.
Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
 - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
e.g. "./gen-abi-tarballs.sh -v latest"
If no tag is specified, the script will default to "latest"
Using these parameters, the script will produce several *.tar.gz
archives containing the .dump files required to do abi breakage checking.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
 devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100755 devtools/gen-abi-tarballs.sh

diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
new file mode 100755
index 000000000..bcc1beac5
--- /dev/null
+++ b/devtools/gen-abi-tarballs.sh
@@ -0,0 +1,48 @@
+#! /bin/sh -e
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+# Generate the required prebuilt ABI references for test-meson-build.sh
+
+# Get arguments
+usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
+abi_tag=
+while getopts "v:h" arg; do
+	case $arg in
+	v)
+		if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+			echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
+			exit 1
+		fi
+		DPDK_ABI_REF_VERSION=${OPTARG} ;;
+	h)
+		usage ;;
+	*)
+		usage ;;
+	esac
+done
+
+if [ -z $DPDK_ABI_REF_VERSION ] ; then
+	DPDK_ABI_REF_VERSION="latest"
+fi
+
+srcdir=$(dirname $(readlink -f $0))/..
+
+DPDK_ABI_GEN_REF=-20
+DPDK_ABI_REF_DIR=$srcdir/__abitarballs
+
+. $srcdir/devtools/test-meson-builds.sh
+
+abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
+
+rm -rf $abirefdir/build-*.tar.gz
+cd $abirefdir
+for f in build-* ; do
+	tar -czf $f.tar.gz $f
+done
+cp *.tar.gz ../
+rm -rf *
+mv ../*.tar.gz .
+rm -rf build-x86-default.tar.gz
+
+echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
-- 
2.25.1


^ permalink raw reply	[relevance 21%]

* [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks
  @ 2020-10-12  8:08  9% ` Conor Walsh
  2020-10-12  8:08 21%   ` [dpdk-dev] [PATCH v5 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
                     ` (4 more replies)
  0 siblings, 5 replies; 200+ results
From: Conor Walsh @ 2020-10-12  8:08 UTC (permalink / raw)
  To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh

This patchset will help developers discover abi breakages more easily
before upstreaming their code. Checking that the DPDK ABI has not
changed before upstreaming code is currently not intuitive and the
process is time consuming. Contributors must use the
test-meson-builds.sh tool, alongside some environmental variables, to
test their changes. Contributors in many cases are either unaware or
unable to do this themselves, leading to a potentially serious situation
where they are unknowingly upstreaming code that breaks the ABI. These
breakages are caught by Travis, but it would be more efficient if they
were caught locally before upstreaming. This patchset introduces changes
to test-meson-builds.sh and check-abi.sh, and adds a new script,
gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX
changes such as adding command line arguments and allowing the use of
relative paths. The number of abi checks is reduced to just two, one for
x86_64 and one for ARM; the references for these tests can now be prebuilt
and downloaded by test-meson-builds.sh, allowing the tests to run much
faster. check-abi.sh is updated to use the prebuilt references.
gen-abi-tarballs.sh is a new script to generate the prebuilt abi
references used by test-meson-builds.sh; these compressed archives can be
retrieved from either a local directory or a remote http location.

---
v5:
 - Patchset has been completely reworked following feedback
 - Patchset is now part of test-meson-builds.sh not the meson build system

v4:
 - Reworked both Python scripts to use more native Python functions
   and modules.
 - Python scripts are now in line with how other Python scripts in
   DPDK are structured.

v3:
 - Fix for bug which now allows meson < 0.48.0 to be used
 - Various coding style changes throughout
 - Minor bug fixes to the various meson.build files

v2: Spelling mistake, corrected spelling of environmental

Conor Walsh (4):
  devtools: add generation of compressed abi dump archives
  devtools: abi and UX changes for test-meson-builds.sh
  devtools: change dump file not found to warning in check-abi.sh
  doc: test-meson-builds.sh doc updates

 devtools/check-abi.sh               |   3 +-
 devtools/gen-abi-tarballs.sh        |  48 ++++++++
 devtools/test-meson-builds.sh       | 170 ++++++++++++++++++++++------
 doc/guides/contributing/patches.rst |  43 +++++--
 4 files changed, 220 insertions(+), 44 deletions(-)
 create mode 100755 devtools/gen-abi-tarballs.sh

-- 
2.25.1


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth
  2020-10-11 20:11  0%     ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Thomas Monjalon
@ 2020-10-12  5:24  0%       ` Dharmappa, Savinay
  2020-10-12 23:08  0%         ` Dharmappa, Savinay
  0 siblings, 1 reply; 200+ results
From: Dharmappa, Savinay @ 2020-10-12  5:24 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Dumitrescu, Cristian, Singh, Jasvinder, dev


09/10/2020 14:39, Savinay Dharmappa:
> DPDK sched library allows runtime configuration of the pipe profiles 
> to the pipes of the subport once scheduler hierarchy is constructed. 
> However, to change the subport level bandwidth, existing hierarchy 
> needs to be dismantled and whole process of building hierarchy under 
> subport nodes needs to be repeated which might result in router 
> downtime. Furthermore, due to lack of dynamic configuration of the 
> subport bandwidth profile configuration (shaper and Traffic class 
> rates), the user application is unable to dynamically re-distribute 
> the excess-bandwidth of one subport among other subports in the 
> scheduler hierarchy. Therefore, it is also not possible to adjust the 
> subport bandwidth profile in sync with dynamic changes in pipe 
> profiles of subscribers who want to consume higher bandwidth opportunistically.
> 
> This patch series implements dynamic configuration of the subport 
> bandwidth profile to overcome the runtime situation when group of 
> subscribers are not using the allotted bandwidth and dynamic bandwidth 
> re-distribution is needed the without making any structural changes in the hierarchy.
> 
> The implementation work includes refactoring the existing api and data 
> structures defined for port and subport level, new APIs for adding 
> subport level bandwidth profiles that can be used in runtime.
> 
> ---
> v8 -> v9
>    - updated ABI section in release notes.
>    - Addressed review comments from patch 8
>      of v8.

I was asking a question in my reply to v8 but you didn't hit the "reply" button.
>> Sorry for that. All the questions you raised were relevant, so I addressed them and sent out v9.

One more question: why don't you keep the ack given by Cristian in v7?
>> I am carrying the ack given by Cristian in v9, but it is at the bottom of the cover letter.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: remove crypto list end enumerators
  2020-10-08 19:58  3%   ` Akhil Goyal
@ 2020-10-12  5:15  0%     ` Kusztal, ArkadiuszX
  2020-10-12 11:46  0%       ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Kusztal, ArkadiuszX @ 2020-10-12  5:15 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Trahe, Fiona, ruifeng.wang, michaelsh

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: czwartek, 8 października 2020 21:58
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; ruifeng.wang@arm.com;
> michaelsh@marvell.com
> Subject: RE: [PATCH v2 3/5] cryptodev: remove crypto list end enumerators
> 
> > diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> > b/lib/librte_cryptodev/rte_crypto_sym.h
> > index f29c98051..7a2556a9e 100644
> > --- a/lib/librte_cryptodev/rte_crypto_sym.h
> > +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> > @@ -132,15 +132,12 @@ enum rte_crypto_cipher_algorithm {
> >  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
> >  	 */
> >
> > -	RTE_CRYPTO_CIPHER_DES_DOCSISBPI,
> > +	RTE_CRYPTO_CIPHER_DES_DOCSISBPI
> >  	/**< DES algorithm using modes required by
> >  	 * DOCSIS Baseline Privacy Plus Spec.
> >  	 * Chained mbufs are not supported in this mode, i.e. rte_mbuf.next
> >  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
> >  	 */
> > -
> > -	RTE_CRYPTO_CIPHER_LIST_END
> > -
> >  };
> 
> Probably we should add a comment for each of the enums that we change, that
> the user can define its own LIST_END = last item in the enum +1.
> LIST_END is not added to avoid ABI breakage across releases when new algos
> are added.
[Arek] - I do not know if that is necessary; should there be some kind of guarantee that the order and number of enumerators will not change across the releases?
> 
> >
> >  /** Cipher algorithm name strings */
> > @@ -312,10 +309,8 @@ enum rte_crypto_auth_algorithm {
> >  	/**< HMAC using 384 bit SHA3 algorithm. */
> >  	RTE_CRYPTO_AUTH_SHA3_512,
> >  	/**< 512 bit SHA3 algorithm. */
> > -	RTE_CRYPTO_AUTH_SHA3_512_HMAC,
> > +	RTE_CRYPTO_AUTH_SHA3_512_HMAC
> >  	/**< HMAC using 512 bit SHA3 algorithm. */
> > -
> > -	RTE_CRYPTO_AUTH_LIST_END
> >  };
> >
> >  /** Authentication algorithm name strings */ @@ -412,9 +407,8 @@ enum
> > rte_crypto_aead_algorithm {
> >  	/**< AES algorithm in CCM mode. */
> >  	RTE_CRYPTO_AEAD_AES_GCM,
> >  	/**< AES algorithm in GCM mode. */
> > -	RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
> > +	RTE_CRYPTO_AEAD_CHACHA20_POLY1305
> >  	/**< Chacha20 cipher with poly1305 authenticator */
> > -	RTE_CRYPTO_AEAD_LIST_END
> >  };
> >
> >  /** AEAD algorithm name strings */
> > --
> > 2.17.1


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 2/8] security: modify PDCP xform to support SDAP
  @ 2020-10-11 21:33  4%   ` Akhil Goyal
    1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-11 21:33 UTC (permalink / raw)
  To: dev, techboard
  Cc: thomas, anoobj, hemant.agrawal, declan.doherty, david.coyle, Akhil Goyal

SDAP is a protocol in the LTE stack, on top of PDCP, used for
QoS. A particular PDCP session may or may not have
SDAP enabled. But if it is enabled, the SDAP header should be
authenticated but not encrypted when both confidentiality and
integrity are enabled. Hence, the driver should be informed
through the xform so that it skips the SDAP header during encryption.

A new field is added in the PDCP xform to specify that SDAP is enabled.
The overall size of the xform is not changed, as hfn_ovrd is just
a flag and does not need uint32_t. Hence, it is converted to uint8_t
and a 16-bit reserved field is added for future use.
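
For reference, a rough sketch of how an application might request SDAP
processing when creating a PDCP security session (all values other than
sdap_enabled are placeholders, and the crypto transform is assumed to be
set up elsewhere):

#include <rte_security.h>
#include <rte_crypto_sym.h>

/* Chained cipher+auth transform, filled in elsewhere. */
static struct rte_crypto_sym_xform pdcp_cipher_auth_xform;

static struct rte_security_session_conf pdcp_conf = {
	.action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
	.protocol = RTE_SECURITY_PROTOCOL_PDCP,
	.pdcp = {
		.domain = RTE_SECURITY_PDCP_MODE_DATA,
		.sn_size = RTE_SECURITY_PDCP_SN_SIZE_12,
		.hfn = 0,
		.hfn_threshold = 0xfffff,
		.hfn_ovrd = 0,     /* no per-packet HFN override */
		.sdap_enabled = 1, /* SDAP header present: authenticate only */
	},
	.crypto_xform = &pdcp_cipher_auth_xform,
};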

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 lib/librte_security/rte_security.h     | 12 ++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c34ab5493..fad91487a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -103,6 +103,11 @@ New Features
   also known as Mount Bryce.  See the
   :doc:`../bbdevs/acc100` BBDEV guide for more details on this new driver.
 
+* **Updated rte_security library to support SDAP.**
+
+  ``rte_security_pdcp_xform`` in ``rte_security`` lib is updated to enable
+  5G NR processing of SDAP header in PMDs.
+
 * **Updated Virtio driver.**
 
   * Added support for Vhost-vDPA backend to Virtio-user PMD.
@@ -307,6 +312,10 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* security: ``hfn_ovrd`` field in ``rte_security_pdcp_xform`` is changed from
+  ``uint32_t`` to ``uint8_t`` so that a new field ``sdap_enabled`` can be added
+  to support SDAP.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 16839e539..c259b35e0 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2020 NXP
  * Copyright(c) 2017-2020 Intel Corporation.
  */
 
@@ -290,7 +290,15 @@ struct rte_security_pdcp_xform {
 	 * per packet HFN in place of IV. PMDs will extract the HFN
 	 * and perform operations accordingly.
 	 */
-	uint32_t hfn_ovrd;
+	uint8_t hfn_ovrd;
+	/** In case of 5G NR, a new protocol (SDAP) header may be set
+	 * inside PDCP payload which should be authenticated but not
+	 * encrypted. Hence, driver should be notified if SDAP is
+	 * enabled or not, so that SDAP header is not encrypted.
+	 */
+	uint8_t sdap_enabled;
+	/** Reserved for future */
+	uint16_t reserved;
 };
 
 /** DOCSIS direction */
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth
  2020-10-09 12:39  3%   ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
                       ` (2 preceding siblings ...)
  2020-10-09 12:39  5%     ` [dpdk-dev] [PATCH v9 8/8] sched: remove redundant code Savinay Dharmappa
@ 2020-10-11 20:11  0%     ` Thomas Monjalon
  2020-10-12  5:24  0%       ` Dharmappa, Savinay
  3 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-11 20:11 UTC (permalink / raw)
  To: Savinay Dharmappa; +Cc: cristian.dumitrescu, jasvinder.singh, dev

09/10/2020 14:39, Savinay Dharmappa:
> DPDK sched library allows runtime configuration of the pipe profiles to the
> pipes of the subport once scheduler hierarchy is constructed. However, to
> change the subport level bandwidth, existing hierarchy needs to be
> dismantled and whole process of building hierarchy under subport nodes
> needs to be repeated which might result in router downtime. Furthermore,
> due to lack of dynamic configuration of the subport bandwidth profile
> configuration (shaper and Traffic class rates), the user application
> is unable to dynamically re-distribute the excess-bandwidth of one subport
> among other subports in the scheduler hierarchy. Therefore, it is also not
> possible to adjust the subport bandwidth profile in sync with dynamic
> changes in pipe profiles of subscribers who want to consume higher
> bandwidth opportunistically.
> 
> This patch series implements dynamic configuration of the subport bandwidth
> profile to overcome the runtime situation when group of subscribers are not
> using the allotted bandwidth and dynamic bandwidth re-distribution is
> needed the without making any structural changes in the hierarchy.
> 
> The implementation work includes refactoring the existing api and
> data structures defined for port and subport level, new APIs for
> adding subport level bandwidth profiles that can be used in runtime.
> 
> ---
> v8 -> v9
>    - updated ABI section in release notes.
>    - Addressed review comments from patch 8
>      of v8.

I was asking a question in my reply to v8 but you didn't hit the "reply" button.

One more question: why don't you keep the ack given by Cristian in v7?




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [dpdk-dev v13 1/4] cryptodev: change crypto symmetric vector structure
  @ 2020-10-11  0:38  3%       ` Fan Zhang
  0 siblings, 0 replies; 200+ results
From: Fan Zhang @ 2020-10-11  0:38 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, Fan Zhang

This patch updates the ``rte_crypto_sym_vec`` structure to add
support for both the cpu_crypto synchronous operation and the
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.
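
As a rough illustration of the updated layout (a sketch only, mirroring
the unit test changes below; for the CPU crypto path only the virtual
addresses need to be filled in), a single AEAD operation could be
submitted like this:

#include <rte_cryptodev.h>

static int32_t
cpu_aead_one(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
	     void *buf, uint32_t len, void *iv, void *aad, void *digest)
{
	struct rte_crypto_vec seg = { .base = buf, .len = len };
	struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
	struct rte_crypto_va_iova_ptr iv_p = { .va = iv };
	struct rte_crypto_va_iova_ptr aad_p = { .va = aad };
	struct rte_crypto_va_iova_ptr digest_p = { .va = digest };
	int32_t status = 0;
	union rte_crypto_sym_ofs ofs = { .raw = 0 };
	struct rte_crypto_sym_vec vec = {
		.sgl = &sgl, .iv = &iv_p, .aad = &aad_p,
		.digest = &digest_p, .status = &status, .num = 1,
	};

	/* The call returns the number of successfully processed operations;
	 * the per-operation result is written back through vec.status.
	 */
	rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &vec);
	return status;
}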

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 app/test/test_cryptodev.c                  | 25 ++++++++------
 doc/guides/prog_guide/cryptodev_lib.rst    |  3 +-
 doc/guides/rel_notes/release_20_11.rst     |  3 ++
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c   | 18 +++++-----
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c |  9 +++--
 lib/librte_cryptodev/rte_crypto_sym.h      | 40 ++++++++++++++++------
 lib/librte_ipsec/esp_inb.c                 | 12 +++----
 lib/librte_ipsec/esp_outb.c                | 12 +++----
 lib/librte_ipsec/misc.h                    |  6 ++--
 9 files changed, 79 insertions(+), 49 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ac2a36bc2..62a265520 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -151,11 +151,11 @@ static void
 process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, aad_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -171,13 +171,17 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->aead.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
+	symvec.aad = &aad_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	/* for CPU crypto the IOVA address is not required */
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->aead.digest.data;
+	aad_ptr.va = (void *)sop->aead.aad.data;
+
 	ofs.raw = 0;
 
 	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
@@ -193,11 +197,11 @@ static void
 process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -213,13 +217,14 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->auth.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->auth.digest.data;
+
 	ofs.raw = 0;
 	ofs.ofs.cipher.head = sop->cipher.data.offset - sop->auth.data.offset;
 	ofs.ofs.cipher.tail = (sop->auth.data.offset + sop->auth.data.length) -
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..e7ba35c2d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -620,7 +620,8 @@ operation descriptor (``struct rte_crypto_sym_vec``) containing:
   descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
   of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
   an array of segment descriptors ``struct rte_crypto_vec``;
-- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointers to arrays of size ``num`` containing IV, AAD and digest information
+  in the ``cpu_crypto`` sub-structure,
 - pointer to an array of size ``num`` where status information will be stored
   for each operation.
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 8b911488c..2973b2a33 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -302,6 +302,9 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* The structure ``rte_crypto_sym_vec`` is updated to support both
+  cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
+
 
 ABI Changes
 -----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1d2a0ce00..973b61bd6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -464,9 +464,10 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -482,9 +483,10 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		 vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -505,9 +507,9 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -528,9 +530,9 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 34a39ca99..39f90f537 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1877,7 +1877,7 @@ generate_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			memcpy(vec->digest[i], dgst[i], len);
+			memcpy(vec->digest[i].va, dgst[i], len);
 			k++;
 		}
 	}
@@ -1893,7 +1893,7 @@ verify_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			if (memcmp(vec->digest[i], dgst[i], len) != 0)
+			if (memcmp(vec->digest[i].va, dgst[i], len) != 0)
 				vec->status[i] = EBADMSG;
 			else
 				k++;
@@ -1956,9 +1956,8 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
 		}
 
 		/* Submit job for processing */
-		set_cpu_mb_job_params(job, s, sofs, buf, len,
-			vec->iv[i], vec->aad[i], tmp_dgst[i],
-			&vec->status[i]);
+		set_cpu_mb_job_params(job, s, sofs, buf, len, vec->iv[i].va,
+			vec->aad[i].va, tmp_dgst[i], &vec->status[i]);
 		job = submit_sync_job(mb_mgr);
 		j++;
 
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..e1f23d303 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -51,26 +51,44 @@ struct rte_crypto_sgl {
 };
 
 /**
- * Synchronous operation descriptor.
- * Supposed to be used with CPU crypto API call.
+ * Crypto virtual and IOVA address descriptor, used to describe cryptographic
+ * data buffer without the length information. The length information is
+ * normally predefined during session creation.
+ */
+struct rte_crypto_va_iova_ptr {
+	void *va;
+	rte_iova_t iova;
+};
+
+/**
+ * Raw data operation descriptor.
+ * Supposed to be used with synchronous CPU crypto API call or asynchronous
+ * RAW data path API call.
  */
 struct rte_crypto_sym_vec {
+	/** number of operations to perform */
+	uint32_t num;
 	/** array of SGL vectors */
 	struct rte_crypto_sgl *sgl;
-	/** array of pointers to IV */
-	void **iv;
-	/** array of pointers to AAD */
-	void **aad;
+	/** array of pointers to cipher IV */
+	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
-	void **digest;
+	struct rte_crypto_va_iova_ptr *digest;
+
+	__extension__
+	union {
+		/** array of pointers to auth IV, used for chain operation */
+		struct rte_crypto_va_iova_ptr *auth_iv;
+		/** array of pointers to AAD, used for AEAD operation */
+		struct rte_crypto_va_iova_ptr *aad;
+	};
+
 	/**
 	 * array of statuses for each operation:
-	 *  - 0 on success
-	 *  - errno on error
+	 * - 0 on success
+	 * - errno on error
 	 */
 	int32_t *status;
-	/** number of operations to perform */
-	uint32_t num;
 };
 
 /**
diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 96eec0131..2b1df6a03 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -693,9 +693,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 	struct rte_ipsec_sa *sa;
 	struct replay_sqn *rsn;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -720,9 +720,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 				l4ofs + k, rc, ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index fb9d5864c..1e181cf2c 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -449,9 +449,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 	uint32_t i, k, n;
 	uint32_t l2, l3;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -488,9 +488,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 				ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index 1b543ed87..79b9a2076 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -112,7 +112,9 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 static inline void
 cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
-	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	struct rte_crypto_va_iova_ptr iv[],
+	struct rte_crypto_va_iova_ptr aad[],
+	struct rte_crypto_va_iova_ptr dgst[], uint32_t l4ofs[],
 	uint32_t clen[], uint32_t num)
 {
 	uint32_t i, j, n;
@@ -136,8 +138,8 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 			/* fill the request structure */
 			symvec.sgl = &vecpkt[j];
 			symvec.iv = &iv[j];
-			symvec.aad = &aad[j];
 			symvec.digest = &dgst[j];
+			symvec.aad = &aad[j];
 			symvec.status = &st[j];
 			symvec.num = i - j;
 
-- 
2.20.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [dpdk-dev v12 1/4] cryptodev: change crypto symmetric vector structure
  @ 2020-10-11  0:32  3%     ` Fan Zhang
    1 sibling, 0 replies; 200+ results
From: Fan Zhang @ 2020-10-11  0:32 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, Fan Zhang, Adam Dybkowski, Konstantin Ananyev

This patch updates the ``rte_crypto_sym_vec`` structure to add
support for both the cpu_crypto synchronous operation and the
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/test_cryptodev.c                  | 25 ++++++++------
 doc/guides/prog_guide/cryptodev_lib.rst    |  3 +-
 doc/guides/rel_notes/release_20_11.rst     |  3 ++
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c   | 18 +++++-----
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c |  9 +++--
 lib/librte_cryptodev/rte_crypto_sym.h      | 40 ++++++++++++++++------
 lib/librte_ipsec/esp_inb.c                 | 12 +++----
 lib/librte_ipsec/esp_outb.c                | 12 +++----
 lib/librte_ipsec/misc.h                    |  6 ++--
 9 files changed, 79 insertions(+), 49 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ac2a36bc2..62a265520 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -151,11 +151,11 @@ static void
 process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, aad_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -171,13 +171,17 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->aead.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
+	symvec.aad = &aad_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	/* for CPU crypto the IOVA address is not required */
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->aead.digest.data;
+	aad_ptr.va = (void *)sop->aead.aad.data;
+
 	ofs.raw = 0;
 
 	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
@@ -193,11 +197,11 @@ static void
 process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -213,13 +217,14 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->auth.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->auth.digest.data;
+
 	ofs.raw = 0;
 	ofs.ofs.cipher.head = sop->cipher.data.offset - sop->auth.data.offset;
 	ofs.ofs.cipher.tail = (sop->auth.data.offset + sop->auth.data.length) -
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..e7ba35c2d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -620,7 +620,8 @@ operation descriptor (``struct rte_crypto_sym_vec``) containing:
   descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
   of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
   an array of segment descriptors ``struct rte_crypto_vec``;
-- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointers to arrays of size ``num`` containing IV, AAD and digest information
+  in the ``cpu_crypto`` sub-structure,
 - pointer to an array of size ``num`` where status information will be stored
   for each operation.
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 8b911488c..2973b2a33 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -302,6 +302,9 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* The structure ``rte_crypto_sym_vec`` is updated to support both
+  cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
+
 
 ABI Changes
 -----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1d2a0ce00..973b61bd6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -464,9 +464,10 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -482,9 +483,10 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		 vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -505,9 +507,9 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -528,9 +530,9 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 34a39ca99..39f90f537 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1877,7 +1877,7 @@ generate_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			memcpy(vec->digest[i], dgst[i], len);
+			memcpy(vec->digest[i].va, dgst[i], len);
 			k++;
 		}
 	}
@@ -1893,7 +1893,7 @@ verify_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			if (memcmp(vec->digest[i], dgst[i], len) != 0)
+			if (memcmp(vec->digest[i].va, dgst[i], len) != 0)
 				vec->status[i] = EBADMSG;
 			else
 				k++;
@@ -1956,9 +1956,8 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
 		}
 
 		/* Submit job for processing */
-		set_cpu_mb_job_params(job, s, sofs, buf, len,
-			vec->iv[i], vec->aad[i], tmp_dgst[i],
-			&vec->status[i]);
+		set_cpu_mb_job_params(job, s, sofs, buf, len, vec->iv[i].va,
+			vec->aad[i].va, tmp_dgst[i], &vec->status[i]);
 		job = submit_sync_job(mb_mgr);
 		j++;
 
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..e1f23d303 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -51,26 +51,44 @@ struct rte_crypto_sgl {
 };
 
 /**
- * Synchronous operation descriptor.
- * Supposed to be used with CPU crypto API call.
+ * Crypto virtual and IOVA address descriptor, used to describe cryptographic
+ * data buffer without the length information. The length information is
+ * normally predefined during session creation.
+ */
+struct rte_crypto_va_iova_ptr {
+	void *va;
+	rte_iova_t iova;
+};
+
+/**
+ * Raw data operation descriptor.
+ * Supposed to be used with synchronous CPU crypto API call or asynchronous
+ * RAW data path API call.
  */
 struct rte_crypto_sym_vec {
+	/** number of operations to perform */
+	uint32_t num;
 	/** array of SGL vectors */
 	struct rte_crypto_sgl *sgl;
-	/** array of pointers to IV */
-	void **iv;
-	/** array of pointers to AAD */
-	void **aad;
+	/** array of pointers to cipher IV */
+	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
-	void **digest;
+	struct rte_crypto_va_iova_ptr *digest;
+
+	__extension__
+	union {
+		/** array of pointers to auth IV, used for chain operation */
+		struct rte_crypto_va_iova_ptr *auth_iv;
+		/** array of pointers to AAD, used for AEAD operation */
+		struct rte_crypto_va_iova_ptr *aad;
+	};
+
 	/**
 	 * array of statuses for each operation:
-	 *  - 0 on success
-	 *  - errno on error
+	 * - 0 on success
+	 * - errno on error
 	 */
 	int32_t *status;
-	/** number of operations to perform */
-	uint32_t num;
 };
 
 /**
diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 96eec0131..2b1df6a03 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -693,9 +693,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 	struct rte_ipsec_sa *sa;
 	struct replay_sqn *rsn;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -720,9 +720,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 				l4ofs + k, rc, ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index fb9d5864c..1e181cf2c 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -449,9 +449,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 	uint32_t i, k, n;
 	uint32_t l2, l3;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -488,9 +488,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 				ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index 1b543ed87..79b9a2076 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -112,7 +112,9 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 static inline void
 cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
-	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	struct rte_crypto_va_iova_ptr iv[],
+	struct rte_crypto_va_iova_ptr aad[],
+	struct rte_crypto_va_iova_ptr dgst[], uint32_t l4ofs[],
 	uint32_t clen[], uint32_t num)
 {
 	uint32_t i, j, n;
@@ -136,8 +138,8 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 			/* fill the request structure */
 			symvec.sgl = &vecpkt[j];
 			symvec.iv = &iv[j];
-			symvec.aad = &aad[j];
 			symvec.digest = &dgst[j];
+			symvec.aad = &aad[j];
 			symvec.status = &st[j];
 			symvec.num = i - j;
 
-- 
2.20.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2] security: update session create API
    @ 2020-10-10 22:11  2% ` Akhil Goyal
  2020-10-13  2:12  0%   ` Lukasz Wojciechowski
  2020-10-14 18:56  2%   ` [dpdk-dev] [PATCH v3] " Akhil Goyal
  1 sibling, 2 replies; 200+ results
From: Akhil Goyal @ 2020-10-10 22:11 UTC (permalink / raw)
  To: dev
  Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
	declan.doherty, radu.nicolau, david.coyle, l.wojciechow,
	Akhil Goyal

The API ``rte_security_session_create`` takes only a single
mempool for both the session and the session private data. The
application therefore needs to create a mempool for twice the
number of sessions required, which also wastes memory because
the session private data needs more space than the session
itself. Hence the API is modified to take two mempool pointers
- one for the session and one for the private data.
This is very similar to the crypto based session create APIs.
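
For illustration only (not part of this patch): a minimal sketch of the two
mempools the reworked API expects. The pool names and the NB_SESS/CACHE_SZ
constants are placeholders, not values used by the series.

#include <rte_mempool.h>
#include <rte_security.h>

#define NB_SESS  1024	/* placeholder */
#define CACHE_SZ 32	/* placeholder */

static struct rte_security_session *
create_sec_session(struct rte_security_ctx *ctx,
		   struct rte_security_session_conf *conf, int socket_id)
{
	struct rte_mempool *sess_mp, *priv_mp;

	/* pool of generic session objects */
	sess_mp = rte_mempool_create("sec_sess_mp", NB_SESS,
			sizeof(struct rte_security_session),
			CACHE_SZ, 0, NULL, NULL, NULL, NULL,
			socket_id, 0);

	/* pool sized for the driver's private session data */
	priv_mp = rte_mempool_create("sec_sess_priv_mp", NB_SESS,
			rte_security_session_get_size(ctx),
			CACHE_SZ, 0, NULL, NULL, NULL, NULL,
			socket_id, 0);

	if (sess_mp == NULL || priv_mp == NULL)
		return NULL;

	return rte_security_session_create(ctx, conf, sess_mp, priv_mp);
}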

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---

Changes in V2:
incorporated comments from Lukasz and David.

 app/test-crypto-perf/cperf_ops.c       |  4 +-
 app/test-crypto-perf/main.c            | 12 +++--
 app/test/test_cryptodev.c              | 18 ++++++--
 app/test/test_ipsec.c                  |  3 +-
 app/test/test_security.c               | 61 ++++++++++++++++++++------
 doc/guides/prog_guide/rte_security.rst |  8 +++-
 doc/guides/rel_notes/deprecation.rst   |  7 ---
 doc/guides/rel_notes/release_20_11.rst |  6 +++
 examples/ipsec-secgw/ipsec-secgw.c     | 12 +----
 examples/ipsec-secgw/ipsec.c           |  9 ++--
 lib/librte_security/rte_security.c     |  7 ++-
 lib/librte_security/rte_security.h     |  4 +-
 12 files changed, 102 insertions(+), 49 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 3da835a9c..3a64a2c34 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp);
+					&sess_conf, sess_mp, priv_mp);
 	}
 	if (options->op_type == CPERF_DOCSIS) {
 		enum rte_security_docsis_direction direction;
@@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, priv_mp);
+					&sess_conf, sess_mp, priv_mp);
 	}
 #endif
 	sess = rte_cryptodev_sym_session_create(sess_mp);
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 62ae6048b..53864ffdd 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 		if (sess_size > max_sess_size)
 			max_sess_size = sess_size;
 	}
-
+#ifdef RTE_LIBRTE_SECURITY
+	for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
+		sess_size = rte_security_session_get_size(
+				rte_cryptodev_get_sec_ctx(cdev_id));
+		if (sess_size > max_sess_size)
+			max_sess_size = sess_size;
+	}
+#endif
 	/*
 	 * Calculate number of needed queue pairs, based on the amount
 	 * of available number of logical cores and crypto devices.
@@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 				opts->nb_qps * nb_slaves;
 #endif
 		} else
-			sessions_needed = enabled_cdev_count *
-						opts->nb_qps * 2;
+			sessions_needed = enabled_cdev_count * opts->nb_qps;
 
 		/*
 		 * A single session is required per queue pair
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ac2a36bc2..4bd9d8aff 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -553,9 +553,15 @@ testsuite_setup(void)
 	unsigned int session_size =
 		rte_cryptodev_sym_get_private_session_size(dev_id);
 
+#ifdef RTE_LIBRTE_SECURITY
+	unsigned int security_session_size = rte_security_session_get_size(
+			rte_cryptodev_get_sec_ctx(dev_id));
+
+	if (session_size < security_session_size)
+			session_size = security_session_size;
+#endif
 	/*
-	 * Create mempool with maximum number of sessions * 2,
-	 * to include the session headers
+	 * Create mempool with maximum number of sessions.
 	 */
 	if (info.sym.max_nb_sessions != 0 &&
 			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
@@ -7219,7 +7225,8 @@ test_pdcp_proto(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool,
+				ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -7479,7 +7486,8 @@ test_pdcp_proto_SGL(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool,
+				ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -7836,6 +7844,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+					ts_params->session_mpool,
 					ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
@@ -8011,6 +8020,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+					ts_params->session_mpool,
 					ts_params->session_priv_mpool);
 
 	if (!ut_params->sec_session) {
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index 79d00d7e0..9ad07a179 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
 	static struct rte_security_session_conf conf;
 
 	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
-					&conf, qp->mp_session_private);
+					&conf, qp->mp_session,
+					qp->mp_session_private);
 
 	if (ut->ss[j].security.ses == NULL)
 		return -ENOMEM;
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 77fd5adc6..bf6a3e9de 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -237,24 +237,25 @@ static struct mock_session_create_data {
 	struct rte_security_session_conf *conf;
 	struct rte_security_session *sess;
 	struct rte_mempool *mp;
+	struct rte_mempool *priv_mp;
 
 	int ret;
 
 	int called;
 	int failed;
-} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
+} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
 
 static int
 mock_session_create(void *device,
 		struct rte_security_session_conf *conf,
 		struct rte_security_session *sess,
-		struct rte_mempool *mp)
+		struct rte_mempool *priv_mp)
 {
 	mock_session_create_exp.called++;
 
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
-	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
+	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
 
 	mock_session_create_exp.sess = sess;
 
@@ -502,6 +503,7 @@ struct rte_security_ops mock_ops = {
  */
 static struct security_testsuite_params {
 	struct rte_mempool *session_mpool;
+	struct rte_mempool *session_priv_mpool;
 } testsuite_params = { NULL };
 
 /**
@@ -524,7 +526,8 @@ static struct security_unittest_params {
 	.sess = NULL,
 };
 
-#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
+#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
+#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
 #define SECURITY_TEST_MEMPOOL_SIZE 15
 #define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
 
@@ -545,6 +548,22 @@ testsuite_setup(void)
 			SOCKET_ID_ANY, 0);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"Cannot create mempool %s\n", rte_strerror(rte_errno));
+
+	ts_params->session_priv_mpool = rte_mempool_create(
+			SECURITY_TEST_PRIV_MEMPOOL_NAME,
+			SECURITY_TEST_MEMPOOL_SIZE,
+			rte_security_session_get_size(&unittest_params.ctx),
+			0, 0, NULL, NULL, NULL, NULL,
+			SOCKET_ID_ANY, 0);
+	if (ts_params->session_priv_mpool == NULL) {
+		printf("TestCase %s() line %d failed (null): "
+				"Cannot create priv mempool %s\n",
+				__func__, __LINE__, rte_strerror(rte_errno));
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+		return TEST_FAILED;
+	}
+
 	return TEST_SUCCESS;
 }
 
@@ -559,6 +578,10 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
 	}
+	if (ts_params->session_priv_mpool) {
+		rte_mempool_free(ts_params->session_priv_mpool);
+		ts_params->session_priv_mpool = NULL;
+	}
 }
 
 /**
@@ -659,7 +682,8 @@ ut_setup_with_session(void)
 	mock_session_create_exp.ret = 0;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -701,7 +725,8 @@ test_session_create_inv_context(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(NULL, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -725,7 +750,8 @@ test_session_create_inv_context_ops(void)
 	ut_params->ctx.ops = NULL;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -749,7 +775,8 @@ test_session_create_inv_context_ops_fun(void)
 	ut_params->ctx.ops = &empty_ops;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -770,7 +797,8 @@ test_session_create_inv_configuration(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, NULL,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -781,7 +809,7 @@ test_session_create_inv_configuration(void)
 }
 
 /**
- * Test execution of rte_security_session_create with NULL mp parameter
+ * Test execution of rte_security_session_create with NULL mempools
  */
 static int
 test_session_create_inv_mempool(void)
@@ -790,7 +818,7 @@ test_session_create_inv_mempool(void)
 	struct rte_security_session *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			NULL);
+			NULL, NULL);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -824,7 +852,8 @@ test_session_create_mempool_empty(void)
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
@@ -853,10 +882,12 @@ test_session_create_ops_failure(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
+	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = -1;	/* Return failure status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
@@ -879,10 +910,12 @@ test_session_create_success(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
+	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;	/* Return success status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool);
+			ts_params->session_mpool,
+			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 127da2e4f..fdb469d5f 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
 
 The Security framework provides APIs to create and free sessions for crypto/ethernet
 devices, where sessions are mempool objects. It is the application's responsibility
-to create and manage the session mempools. The mempool object size should be able to
-accommodate the driver's private data of security session.
+to create and manage two session mempools - one for session and other for session
+private data. The private session data mempool object size should be able to
+accommodate the driver's private data of security session. The application can get
+the size of session private data using API ``rte_security_session_get_size``.
+And the session mempool object size should be enough to accomodate
+``rte_security_session``.
 
 Once the session mempools have been created, ``rte_security_session_create()``
 is used to allocate and initialize a session for the required crypto/ethernet device.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 52f413e21..d956a76e7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,13 +164,6 @@ Deprecation Notices
   following the IPv6 header, as proposed in RFC
   https://mails.dpdk.org/archives/dev/2020-August/177257.html.
 
-* security: The API ``rte_security_session_create`` takes only single mempool
-  for session and session private data. So the application need to create
-  mempool for twice the number of sessions needed and will also lead to
-  wastage of memory as session private data need more memory compared to session.
-  Hence the API will be modified to take two mempool pointers - one for session
-  and one for private data.
-
 * cryptodev: ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algorithm``,
   ``RTE_CRYPTO_CIPHER_LIST_END`` from ``enum rte_crypto_cipher_algorithm`` and
   ``RTE_CRYPTO_AUTH_LIST_END`` from ``enum rte_crypto_auth_algorithm``
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c34ab5493..68b82ae4e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -307,6 +307,12 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* security: The API ``rte_security_session_create`` is updated to take two
+  mempool objects one for session and other for session private data.
+  So the application need to create two mempools and get the size of session
+  private data using API ``rte_security_session_get_size`` for private session
+  mempool.
+
 
 ABI Changes
 -----------
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 60132c4bd..2326089bb 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
 
 	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_%u", socket_id);
-	/*
-	 * Doubled due to rte_security_session_create() uses one mempool for
-	 * session and for session private data.
-	 */
 	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count()) * 2;
+		rte_lcore_count());
 	sess_mp = rte_cryptodev_sym_session_pool_create(
 			mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
 			socket_id);
@@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
 
 	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_priv_%u", socket_id);
-	/*
-	 * Doubled due to rte_security_session_create() uses one mempool for
-	 * session and for session private data.
-	 */
 	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count()) * 2;
+		rte_lcore_count());
 	sess_mp = rte_mempool_create(mp_name,
 			nb_sess,
 			sess_sz,
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 01faa7ac7..6baeeb342 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			set_ipsec_conf(sa, &(sess_conf.ipsec));
 
 			ips->security.ses = rte_security_session_create(ctx,
-					&sess_conf, ipsec_ctx->session_priv_pool);
+					&sess_conf, ipsec_ctx->session_pool,
+					ipsec_ctx->session_priv_pool);
 			if (ips->security.ses == NULL) {
 				RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		}
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-				&sess_conf, skt_ctx->session_pool);
+				&sess_conf, skt_ctx->session_pool,
+				skt_ctx->session_priv_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		sess_conf.userdata = (void *) sa;
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-					&sess_conf, skt_ctx->session_pool);
+					&sess_conf, skt_ctx->session_pool,
+					skt_ctx->session_priv_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
index 515c29e04..ee4666026 100644
--- a/lib/librte_security/rte_security.c
+++ b/lib/librte_security/rte_security.c
@@ -26,18 +26,21 @@
 struct rte_security_session *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp)
+			    struct rte_mempool *mp,
+			    struct rte_mempool *priv_mp)
 {
 	struct rte_security_session *sess = NULL;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
 	RTE_PTR_OR_ERR_RET(conf, NULL);
 	RTE_PTR_OR_ERR_RET(mp, NULL);
+	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
 
 	if (rte_mempool_get(mp, (void **)&sess))
 		return NULL;
 
-	if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+	if (instance->ops->session_create(instance->device, conf,
+				sess, priv_mp)) {
 		rte_mempool_put(mp, (void *)sess);
 		return NULL;
 	}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 16839e539..1710cdd6a 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -386,6 +386,7 @@ struct rte_security_session {
  * @param   instance	security instance
  * @param   conf	session configuration parameters
  * @param   mp		mempool to allocate session objects from
+ * @param   priv_mp	mempool to allocate session private data objects from
  * @return
  *  - On success, pointer to session
  *  - On failure, NULL
@@ -393,7 +394,8 @@ struct rte_security_session {
 struct rte_security_session *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp);
+			    struct rte_mempool *mp,
+			    struct rte_mempool *priv_mp);
 
 /**
  * Update security session as specified by the session configuration
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH] security: update session create API
  @ 2020-10-10 22:06  0%   ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-10 22:06 UTC (permalink / raw)
  To: Coyle, David, dev, thomas, mdr, anoobj
  Cc: Hemant Agrawal, Ananyev, Konstantin, Doherty, Declan, Nicolau, Radu

Hi David,
> Hi Akhil
> 
> > -----Original Message-----
> > From: akhil.goyal@nxp.com <akhil.goyal@nxp.com>
> > Sent: Thursday, September 3, 2020 9:10 PM
> 
> <snip>
> 
> > diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index
> > 70bf6fe2c..6d7da1408 100644
> > --- a/app/test/test_cryptodev.c
> > +++ b/app/test/test_cryptodev.c
> > @@ -7219,7 +7219,8 @@ test_pdcp_proto(int i, int oop,
> >
> >  	/* Create security session */
> >  	ut_params->sec_session = rte_security_session_create(ctx,
> > -				&sess_conf, ts_params-
> > >session_priv_mpool);
> > +				&sess_conf, ts_params->session_mpool,
> > +				ts_params->session_priv_mpool);
> 
> [DC] ts_params->session_mpool is a cryptodev sym session pool. The
> assumption then in these security tests is that
> security sessions are smaller than cryptodev sym sessions. This is currently true,
> but may not always be.
> 
> There should possibly be a new mempool created for security sessions.
> Or at least an assert somewhere to check a security session is smaller than a
> cryptodev sym session, so that this doesn't
> catch someone out in the future if security session grows in size.
> 
> The same comment applies to the crypto-perf-test and test_ipsec too

Fixed for test and crypto-perf. test_ipsec is not exactly using a security session.
Fixing that is out of scope of this patch.

> 
> <snip>
> 
> > diff --git a/app/test/test_security.c b/app/test/test_security.c index
> > 77fd5adc6..ed7de348f 100644
> > --- a/app/test/test_security.c
> > +++ b/app/test/test_security.c
> > @@ -237,6 +237,7 @@ static struct mock_session_create_data {
> >  	struct rte_security_session_conf *conf;
> >  	struct rte_security_session *sess;
> >  	struct rte_mempool *mp;
> > +	struct rte_mempool *priv_mp;
> >
> 
> <snip>
> 
> > 790,7 +809,7 @@ test_session_create_inv_mempool(void)
> >  	struct rte_security_session *sess;
> >
> >  	sess = rte_security_session_create(&ut_params->ctx, &ut_params-
> > >conf,
> > -			NULL);
> > +			NULL, NULL);
> 
> [DC] This test test_session_create_inv_mempool() should have the priv_mp set
> to a valid
> value (i.e. ts_params->session_priv_mpool), and a new test function should be
> added where
> mp is valid, but priv_mp is NULL - this way we test for validity of both mempools
> independently.

I would say that would be overkill with little gain.
Both mempools should be created before the session is created. That is quite obvious, isn't it?

> 
> <snip>
> 
> > a/doc/guides/prog_guide/rte_security.rst
> > b/doc/guides/prog_guide/rte_security.rst
> > index 127da2e4f..cff0653f5 100644
> > --- a/doc/guides/prog_guide/rte_security.rst
> > +++ b/doc/guides/prog_guide/rte_security.rst
> > @@ -533,8 +533,10 @@ and this allows further acceleration of the offload of
> > Crypto workloads.
> >
> >  The Security framework provides APIs to create and free sessions for
> > crypto/ethernet  devices, where sessions are mempool objects. It is the
> > application's responsibility -to create and manage the session mempools. The
> > mempool object size should be able to -accommodate the driver's private
> > data of security session.
> > +to create and manage two session mempools - one for session and other
> > +for session private data. The mempool object size should be able to
> > +accommodate the driver's private data of security session. The
> > +application can get the size of session private data using API
> > ``rte_security_session_get_size``.
> 
> [DC] This sentence should be updated to specify it's the private session data
> mempool that is being referred to
> 
> "The mempool object size should be able to accommodate the driver's private
> data of security session."
> =>
> "The private session data mempool object size should be able to accommodate
> the driver's private data of security
> session."
> 
> Also, a sentence about the required size of the session mempool should also be
> added.

Fixed in v2

> 
> <snip>
> 
> > diff --git a/doc/guides/rel_notes/release_20_11.rst
> > b/doc/guides/rel_notes/release_20_11.rst
> > index df227a177..04c1a1b81 100644
> > --- a/doc/guides/rel_notes/release_20_11.rst
> > +++ b/doc/guides/rel_notes/release_20_11.rst
> > @@ -84,6 +84,12 @@ API Changes
> >     Also, make sure to start the actual text at the margin.
> >     =======================================================
> >
> > +* security: The API ``rte_security_session_create`` is updated to take
> > +two
> > +  mempool objects one for session and other for session private data.
> > +  So the application need to create two mempools and get the size of
> > +session
> > +  private data using API ``rte_security_session_get_size`` for private
> > +session
> > +  mempool.
> > +
> 
> [DC]  Many of the PMDs which support security don't implement the
> session_get_size
> callback. There's probably a job here for each PMD owner to add support for this
> callback.
> 
If a PMD supports rte_security, then it should comply with the required APIs.

> >
> >  ABI Changes
> >  -----------
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> > secgw/ipsec-secgw.c
> > index 8ba15d23c..55a5ea9f4 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> 
> <snip>
> 
> > @@ -2379,12 +2375,8 @@ session_priv_pool_init(struct socket_ctx *ctx,
> > int32_t socket_id,
> >
> >  	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> >  			"sess_mp_priv_%u", socket_id);
> > -	/*
> > -	 * Doubled due to rte_security_session_create() uses one mempool
> > for
> > -	 * session and for session private data.
> > -	 */
> >  	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> > -		rte_lcore_count()) * 2;
> > +		rte_lcore_count());
> 
> [DC] A change to double the number of sessions was made in test-crypto-perf
> when adding DOCSIS security protocol to this tester.
> It was needed as both session and private session data was pulled from same
> mempool.
> This change can now be reverted like this...

Fixed in v2

> 
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 8f8e580e4..6a71aff5f 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -248,7 +248,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts,
> uint8_t *enabled_cdevs)
>  #endif
>                 } else
>                         sessions_needed = enabled_cdev_count *
> -                                               opts->nb_qps * 2;
> +                                               opts->nb_qps;
> 
> <snip>
> 
> > git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
> > index 515c29e04..293ca747d 100644
> > --- a/lib/librte_security/rte_security.c
> > +++ b/lib/librte_security/rte_security.c
> > @@ -26,7 +26,8 @@
> >  struct rte_security_session *
> >  rte_security_session_create(struct rte_security_ctx *instance,
> >  			    struct rte_security_session_conf *conf,
> > -			    struct rte_mempool *mp)
> > +			    struct rte_mempool *mp,
> > +			    struct rte_mempool *priv_mp)
> >  {
> >  	struct rte_security_session *sess = NULL;
> 
> [DC] Need to add a validity check for priv_mp to rte_security_session_create().
> The cryptodev API checks both mp and priv_mp are not NULL, so security should
> do the same
> 
> RTE_PTR_OR_ERR_RET(priv_mp, NULL);

Fixed in v2
> 
> >
> 
> <snip>
> 
> > --
> > 2.17.1
> 
> [DC] This API change has highlighted a bug in the security callbacks in the AESNi-
> MB PMD, specifically in
> aesni_mb_pmd_sec_sess_destroy() in
> drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> 
> Before putting the private session data back to the mempool, this function
> clears the data with a memset.
> But the bug is that it cleared the security session struct instead of the private
> aesni_mb_session struct.
> This didn't show up previously because the elements of the mempool were large,
> because both security session and private session
> data came from the same mempool with large objects . But now that the
> security session mempool object are much smaller, this causes
> a seg fault
> 
> The fix is as follows:
> 
> diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> index 2362f0c3c..b11d7f12b 100644
> --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> @@ -911,7 +911,7 @@ aesni_mb_pmd_sec_sess_destroy(void *dev
> __rte_unused,
> 
>         if (sess_priv) {
>                 struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -               memset(sess, 0, sizeof(struct aesni_mb_session));
> +               memset(sess_priv, 0, sizeof(struct aesni_mb_session));
>                 set_sec_session_private_data(sess, NULL);
>                 rte_mempool_put(sess_mp, sess_priv);
>         }
> 
> Can this be fixed as part of this patchset or separate fix needed?

This patch is already applied to the tree now.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats
  2020-10-09 20:32  0%             ` Ferruh Yigit
@ 2020-10-10  8:09  0%               ` Thomas Monjalon
  2020-10-12 17:02  0%                 ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-10  8:09 UTC (permalink / raw)
  To: Min Hu (Connor), Honnappa Nagarahalli, Ferruh Yigit
  Cc: Olivier Matz, Stephen Hemminger, techboard, bruce.richardson,
	jerinj, Ray Kinsella, dev

09/10/2020 22:32, Ferruh Yigit:
> On 10/6/2020 9:33 AM, Olivier Matz wrote:
> > On Mon, Oct 05, 2020 at 01:23:08PM +0100, Ferruh Yigit wrote:
> >> On 9/28/2020 4:43 PM, Stephen Hemminger wrote:
> >>> On Mon, 28 Sep 2020 17:24:26 +0200
> >>> Thomas Monjalon <thomas@monjalon.net> wrote:
> >>>> 28/09/2020 15:53, Ferruh Yigit:
> >>>>> On 9/28/2020 10:16 AM, Thomas Monjalon wrote:
> >>>>>> 28/09/2020 10:59, Ferruh Yigit:
> >>>>>>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
> >>>>>>>> From: Huisong Li <lihuisong@huawei.com>
> >>>>>>>>
> >>>>>>>> Currently, only statistics of rx/tx queues with queue_id less than
> >>>>>>>> RTE_ETHDEV_QUEUE_STAT_CNTRS can be displayed. If there is a certain
> >>>>>>>> application scenario that it needs to use 256 or more than 256 queues
> >>>>>>>> and display all statistics of rx/tx queue. At this moment, we have to
> >>>>>>>> change the macro to be equaled to the queue number.
> >>>>>>>>
> >>>>>>>> However, modifying the macro to be greater than 256 will trigger
> >>>>>>>> many errors and warnings from test-pmd, PMD drivers and librte_ethdev
> >>>>>>>> during compiling dpdk project. But it is possible and permitted that
> >>>>>>>> rx/tx queue number is greater than 256 and all statistics of rx/tx
> >>>>>>>> queue need to be displayed. In addition, the data type of rx/tx queue
> >>>>>>>> number in rte_eth_dev_configure API is 'uint16_t'. So It is unreasonable
> >>>>>>>> to use the 'uint8_t' type for variables that control which per-queue
> >>>>>>>> statistics can be displayed.
> >>>>>>
> >>>>>> The explanation is too much complex and misleading.
> >>>>>> You mean you cannot increase RTE_ETHDEV_QUEUE_STAT_CNTRS
> >>>>>> above 256 because it is an 8-bit type?
> >>>>>>
> >>>>>> [...]
> >>>>>>>> --- a/lib/librte_ethdev/rte_ethdev.h
> >>>>>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
> >>>>>>>>      int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
> >>>>>>>> -		uint16_t tx_queue_id, uint8_t stat_idx);
> >>>>>>>> +		uint16_t tx_queue_id, uint16_t stat_idx);
> >>>>>> [...]
> >>>>>>>>      int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
> >>>>>>>>      					   uint16_t rx_queue_id,
> >>>>>>>> -					   uint8_t stat_idx);
> >>>>>>>> +					   uint16_t stat_idx);
> >>>>>> [...]
> >>>>>>> cc'ed tech-board,
> >>>>>>>
> >>>>>>> The patch breaks the ethdev ABI without a deprecation notice from previous
> >>>>>>> release(s).
> >>>>>>>
> >>>>>>> It is mainly a fix to the port_id storage type, which we have updated from
> >>>>>>> uint8_t to uint16_t in past but some seems remained for
> >>>>>>> 'rte_eth_dev_set_tx_queue_stats_mapping()' &
> >>>>>>> 'rte_eth_dev_set_rx_queue_stats_mapping()' APIs.
> >>>>>>
> >>>>>> No, it is not related to the port id, but the number of limited stats.
> >>>>>
> >>>>> Right, it is not related to the port id, it is fixing the storage type for index
> >>>>> used to map the queue stats.
> >>>>>>> Since the ethdev library already heavily breaks the ABI this release, I am for
> >>>>>>> getting this fix, instead of waiting the fix for one more year.
> >>>>>>
> >>>>>> If stats can be managed for more than 256 queues, I think it means
> >>>>>> it is not limited. In this case, we probably don't need the API
> >>>>>> *_queue_stats_mapping which was invented for a limitation of ixgbe.
> >>>>>>
> >>>>>> The problem is probably somewhere else (in testpmd),
> >>>>>> that's why I am against this patch.
> >>>>>
> >>>>> This patch is not to fix queue stats mapping, I agree there are problems related
> >>>>> to it, already shared as comment to this set.
> >>>>>
> >>>>> But this patch is to fix the build errors when 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
> >>>>> needs to set more than 255. Where the build errors seems around the
> >>>>> stats_mapping APIs.
> >>>>
> >>>> It is not said this API is supposed to manage more than 256 queues mapping.
> >>>> In general we should not need this API.
> >>>> I think it is solving the wrong problem.
> >>>
> >>>
> >>> The original API is a band aid for the limited number of statistics counters
> >>> in the Intel IXGBE hardware. It crept into to the DPDK as an API. I would rather
> >>> have per-queue statistics and make ixgbe say "not supported"
> >>>
> >>
> >> The current issue is not directly related to the '*_queue_stats_mapping' APIs.
> >>
> >> The problem is not being able to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255.
> >> A user may need to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, since it is
> >> used to define the size of the per-queue stats counter arrays, e.g.
> >> "uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];"
> >>
> >> When 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, this gives multiple build errors;
> >> the one in ethdev is like [1].
> >>
> >> This can be fixed in two ways:
> >> a) increase the size of the 'stat_idx' storage type to u16 in the
> >> '*_queue_stats_mapping' APIs, which is what this patch does;
> >> b) fix with a cast in the comparison, without changing the APIs.
> >>
> >> I think both are OK, but is (b) preferable?
> > 
> > I think the patch (a) is ok, knowing that RTE_ETHDEV_QUEUE_STAT_CNTRS is
> > not modified.
> > 
> > On the substance, I agree with Thomas that the queue_stats_mapping API
> > should be replaced by xstats.
> > 
> 
> This has been discussed in the last technical board meeting; the decision was to
> use xstats to get queue-related statistics [2].
> 
> But after a second look, even if xstats is used to get statistics,
> 'RTE_ETHDEV_QUEUE_STAT_CNTRS' is still used, since xstats uses 'rte_eth_stats_get()'
> to get queue statistics.
> So for the case where a device has more than 255 queues, 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
> still needs to be set > 255, which will cause the build error.

You're right: when using the old API in the xstats implementation,
we are limited to RTE_ETHDEV_QUEUE_STAT_CNTRS queues.

> I have an AR to send a deprecation notice for the current method of getting the queue
> statistics, and to limit the old method to 256 queues. But since xstats is just a
> wrapper around the old method, I am not quite sure how deprecating it will work.
> 
> @Thomas, @Honnappa, can you give some more insight on the issue?

It becomes a PMD issue. The PMD implementation of xstats must complete
the statistics for the queues above RTE_ETHDEV_QUEUE_STAT_CNTRS.

In order to prepare the removal of the old method smoothly,
we could add a driver flag indicating whether the PMD relies
on a pre-fill of xstats converted from the old per-queue stats or not.
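
Purely as a hypothetical sketch (the flag and helper below are invented for
illustration; nothing like them exists in the patches discussed in this
thread), the idea could look roughly like:

/* Hypothetical sketch only: the flag and helper are invented here to
 * illustrate the idea; they are not part of any patch in this thread.
 * Assumes the ethdev driver-internal headers for struct rte_eth_dev.
 */
#define RTE_ETH_DEV_PMD_FILLS_QUEUE_XSTATS 0x0100 /* invented value */

static int
needs_legacy_queue_prefill(const struct rte_eth_dev *dev)
{
	/* A PMD that reports every queue through its own xstats callback
	 * would set the flag, so the pre-fill derived from the old
	 * per-queue stats (limited to RTE_ETHDEV_QUEUE_STAT_CNTRS
	 * entries) could be skipped for it.
	 */
	return (dev->data->dev_flags & RTE_ETH_DEV_PMD_FILLS_QUEUE_XSTATS) == 0;
}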


> [2]
> https://mails.dpdk.org/archives/dev/2020-October/185299.html




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 03/17] eal: rename lcore word choices
  @ 2020-10-09 21:38  1%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-09 21:38 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Anatoly Burakov

Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
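
For illustration only (not part of this patch), a minimal sketch of how
application code moves to the renamed API; the worker function is a
hypothetical placeholder:

#include <stdio.h>

#include <rte_launch.h>
#include <rte_lcore.h>

/* hypothetical worker body, used only for this sketch */
static int
worker_main(void *arg)
{
	(void)arg;
	printf("lcore %u running\n", rte_lcore_id());
	return 0;
}

static void
run_on_workers(void)
{
	unsigned int lcore_id;

	/* old: rte_get_master_lcore() */
	printf("main lcore is %u\n", rte_get_main_lcore());

	/* old: SKIP_MASTER (the old name still compiles, but is marked
	 * deprecated by this patch)
	 */
	rte_eal_mp_remote_launch(worker_main, NULL, SKIP_MAIN);

	/* old: RTE_LCORE_FOREACH_SLAVE(lcore_id) */
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_wait_lcore(lcore_id);
}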

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/deprecation.rst       | 19 -------
 doc/guides/rel_notes/release_20_11.rst     | 11 ++++
 lib/librte_eal/common/eal_common_dynmem.c  | 10 ++--
 lib/librte_eal/common/eal_common_launch.c  | 36 ++++++------
 lib/librte_eal/common/eal_common_lcore.c   |  8 +--
 lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
 lib/librte_eal/common/eal_options.h        |  2 +
 lib/librte_eal/common/eal_private.h        |  6 +-
 lib/librte_eal/common/rte_random.c         |  2 +-
 lib/librte_eal/common/rte_service.c        |  2 +-
 lib/librte_eal/freebsd/eal.c               | 28 +++++-----
 lib/librte_eal/freebsd/eal_thread.c        | 32 +++++------
 lib/librte_eal/include/rte_eal.h           |  4 +-
 lib/librte_eal/include/rte_eal_trace.h     |  4 +-
 lib/librte_eal/include/rte_launch.h        | 60 ++++++++++----------
 lib/librte_eal/include/rte_lcore.h         | 35 ++++++++----
 lib/librte_eal/linux/eal.c                 | 28 +++++-----
 lib/librte_eal/linux/eal_memory.c          | 10 ++--
 lib/librte_eal/linux/eal_thread.c          | 32 +++++------
 lib/librte_eal/rte_eal_version.map         |  2 +-
 lib/librte_eal/windows/eal.c               | 16 +++---
 lib/librte_eal/windows/eal_thread.c        | 30 +++++-----
 22 files changed, 230 insertions(+), 211 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087934..7271e9ca4d39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
-* eal: To be more inclusive in choice of naming, the DPDK project
-  will replace uses of master/slave in the API's and command line arguments.
-
-  References to master/slave in relation to lcore will be renamed
-  to initial/worker.  The function ``rte_get_master_lcore()``
-  will be renamed to ``rte_get_initial_lcore()``.
-  For the 20.11 release, both names will be present and the
-  old function will be marked with the deprecated tag.
-  The old function will be removed in a future version.
-
-  The iterator for worker lcores will also change:
-  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
-  ``RTE_LCORE_FOREACH_WORKER``.
-
-  The ``master-lcore`` argument to testpmd will be replaced
-  with ``initial-lcore``. The old ``master-lcore`` argument
-  will produce a runtime notification in 20.11 release, and
-  be removed completely in a future release.
-
 * eal: The terms blacklist and whitelist to describe devices used
   by DPDK will be replaced in the 20.11 relase.
   This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 808bdc4e5481..899f4312c736 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -252,6 +252,17 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* eal: The function ``rte_get_master_lcore()`` has been replaced by
+  ``rte_get_main_lcore()``. The old function is deprecated.
+
+  The iterator for worker lcores will also change:
+  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
+  ``RTE_LCORE_FOREACH_WORKER``.
+
+  The ``master-lcore`` argument to testpmd will be replaced
+  with ``main-lcore``. The old ``master-lcore`` argument
+  will produce a runtime notification in 20.11 release, and
+  be removed completely in a future release.
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
 			total_size -= default_size;
 		}
 #else
-		/* in 32-bit mode, allocate all of the memory only on master
+		/* in 32-bit mode, allocate all of the memory only on main
 		 * lcore socket
 		 */
 		total_size = internal_conf->memory;
 		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
 				socket++) {
 			struct rte_config *cfg = rte_eal_get_configuration();
-			unsigned int master_lcore_socket;
+			unsigned int main_lcore_socket;
 
-			master_lcore_socket =
-				rte_lcore_to_socket_id(cfg->master_lcore);
+			main_lcore_socket =
+				rte_lcore_to_socket_id(cfg->main_lcore);
 
-			if (master_lcore_socket != socket)
+			if (main_lcore_socket != socket)
 				continue;
 
 			/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
  * Wait until a lcore finished its job.
  */
 int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
 {
-	if (lcore_config[slave_id].state == WAIT)
+	if (lcore_config[worker_id].state == WAIT)
 		return 0;
 
-	while (lcore_config[slave_id].state != WAIT &&
-	       lcore_config[slave_id].state != FINISHED)
+	while (lcore_config[worker_id].state != WAIT &&
+	       lcore_config[worker_id].state != FINISHED)
 		rte_pause();
 
 	rte_rmb();
 
 	/* we are in finished state, go to wait state */
-	lcore_config[slave_id].state = WAIT;
-	return lcore_config[slave_id].ret;
+	lcore_config[worker_id].state = WAIT;
+	return lcore_config[worker_id].ret;
 }
 
 /*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
  */
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
-			 enum rte_rmt_call_master_t call_master)
+			 enum rte_rmt_call_main_t call_main)
 {
 	int lcore_id;
-	int master = rte_get_master_lcore();
+	int main_lcore = rte_get_main_lcore();
 
 	/* check state of lcores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (lcore_config[lcore_id].state != WAIT)
 			return -EBUSY;
 	}
 
 	/* send messages to cores */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_remote_launch(f, arg, lcore_id);
 	}
 
-	if (call_master == CALL_MASTER) {
-		lcore_config[master].ret = f(arg);
-		lcore_config[master].state = FINISHED;
+	if (call_main == CALL_MAIN) {
+		lcore_config[main_lcore].ret = f(arg);
+		lcore_config[main_lcore].state = FINISHED;
 	}
 
 	return 0;
 }
 
 /*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
  */
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
 {
 	unsigned lcore_id;
 
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		rte_eal_wait_lcore(lcore_id);
 	}
 }
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
 {
-	return rte_eal_get_configuration()->master_lcore;
+	return rte_eal_get_configuration()->main_lcore;
 }
 
 unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
 	if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
 
 	while (i < RTE_MAX_LCORE) {
 		if (!rte_lcore_is_enabled(i) ||
-		    (skip_master && (i == rte_get_master_lcore()))) {
+		    (skip_main && (i == rte_get_main_lcore()))) {
 			i++;
 			if (wrap)
 				i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
 	{OPT_TRACE_BUF_SIZE,    1, NULL, OPT_TRACE_BUF_SIZE_NUM   },
 	{OPT_TRACE_MODE,        1, NULL, OPT_TRACE_MODE_NUM       },
 	{OPT_MASTER_LCORE,      1, NULL, OPT_MASTER_LCORE_NUM     },
+	{OPT_MAIN_LCORE,        1, NULL, OPT_MAIN_LCORE_NUM       },
 	{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
 	{OPT_NO_HPET,           0, NULL, OPT_NO_HPET_NUM          },
 	{OPT_NO_HUGE,           0, NULL, OPT_NO_HUGE_NUM          },
@@ -144,7 +145,7 @@ struct device_option {
 static struct device_option_list devopt_list =
 TAILQ_HEAD_INITIALIZER(devopt_list);
 
-static int master_lcore_parsed;
+static int main_lcore_parsed;
 static int mem_parsed;
 static int core_parsed;
 
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
 		for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
 				j++, idx++) {
 			if ((1 << j) & val) {
-				/* handle master lcore already parsed */
+				/* handle main lcore already parsed */
 				uint32_t lcore = idx;
-				if (master_lcore_parsed &&
-						cfg->master_lcore == lcore) {
+				if (main_lcore_parsed &&
+						cfg->main_lcore == lcore) {
 					RTE_LOG(ERR, EAL,
-						"lcore %u is master lcore, cannot use as service core\n",
+						"lcore %u is main lcore, cannot use as service core\n",
 						idx);
 					return -1;
 				}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
 				min = idx;
 			for (idx = min; idx <= max; idx++) {
 				if (cfg->lcore_role[idx] != ROLE_SERVICE) {
-					/* handle master lcore already parsed */
+					/* handle main lcore already parsed */
 					uint32_t lcore = idx;
-					if (cfg->master_lcore == lcore &&
-							master_lcore_parsed) {
+					if (cfg->main_lcore == lcore &&
+							main_lcore_parsed) {
 						RTE_LOG(ERR, EAL,
-							"Error: lcore %u is master lcore, cannot use as service core\n",
+							"Error: lcore %u is main lcore, cannot use as service core\n",
 							idx);
 						return -1;
 					}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
 	return 0;
 }
 
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
 static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
 {
 	char *parsing_end;
 	struct rte_config *cfg = rte_eal_get_configuration();
 
 	errno = 0;
-	cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+	cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
 	if (errno || parsing_end[0] != 0)
 		return -1;
-	if (cfg->master_lcore >= RTE_MAX_LCORE)
+	if (cfg->main_lcore >= RTE_MAX_LCORE)
 		return -1;
-	master_lcore_parsed = 1;
+	main_lcore_parsed = 1;
 
-	/* ensure master core is not used as service core */
-	if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+	/* ensure main core is not used as service core */
+	if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
 		RTE_LOG(ERR, EAL,
-			"Error: Master lcore is used as a service core\n");
+			"Error: Main lcore is used as a service core\n");
 		return -1;
 	}
 
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
 		break;
 
 	case OPT_MASTER_LCORE_NUM:
-		if (eal_parse_master_lcore(optarg) < 0) {
+		fprintf(stderr,
+			"Option --" OPT_MASTER_LCORE
+			" is deprecated use " OPT_MAIN_LCORE "\n");
+		/* fallthrough */
+	case OPT_MAIN_LCORE_NUM:
+		if (eal_parse_main_lcore(optarg) < 0) {
 			RTE_LOG(ERR, EAL, "invalid parameter for --"
-					OPT_MASTER_LCORE "\n");
+					OPT_MAIN_LCORE "\n");
 			return -1;
 		}
 		break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
 
 	RTE_CPU_AND(cpuset, cpuset, &default_set);
 
-	/* if no remaining cpu, use master lcore cpu affinity */
+	/* if no remaining cpu, use main lcore cpu affinity */
 	if (!CPU_COUNT(cpuset)) {
-		memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+		memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
 			sizeof(*cpuset));
 	}
 }
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
 	if (internal_conf->process_type == RTE_PROC_AUTO)
 		internal_conf->process_type = eal_proc_type_detect();
 
-	/* default master lcore is the first one */
-	if (!master_lcore_parsed) {
-		cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
-		if (cfg->master_lcore >= RTE_MAX_LCORE)
+	/* default main lcore is the first one */
+	if (!main_lcore_parsed) {
+		cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+		if (cfg->main_lcore >= RTE_MAX_LCORE)
 			return -1;
-		lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+		lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
 	}
 
 	compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
-	if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
-		RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+	if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+		RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
 		return -1;
 	}
 
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
 	       "                      '( )' can be omitted for single element group,\n"
 	       "                      '@' can be omitted if cpus and lcores have the same value\n"
 	       "  -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
-	       "  --"OPT_MASTER_LCORE" ID   Core ID that is used as master\n"
+	       "  --"OPT_MAIN_LCORE" ID     Core ID that is used as main\n"
 	       "  --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
 	       "  -n CHANNELS         Number of memory channels\n"
 	       "  -m MB               Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
 	OPT_TRACE_BUF_SIZE_NUM,
 #define OPT_TRACE_MODE        "trace-mode"
 	OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE        "main-lcore"
+	OPT_MAIN_LCORE_NUM,
 #define OPT_MASTER_LCORE      "master-lcore"
 	OPT_MASTER_LCORE_NUM,
 #define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
  */
 struct lcore_config {
 	pthread_t thread_id;       /**< pthread identifier */
-	int pipe_master2slave[2];  /**< communication pipe with master */
-	int pipe_slave2master[2];  /**< communication pipe with master */
+	int pipe_main2worker[2];   /**< communication pipe with main */
+	int pipe_worker2main[2];   /**< communication pipe with main */
 
 	lcore_function_t * volatile f; /**< function to call */
 	void * volatile arg;       /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
  * The global RTE configuration structure.
  */
 struct rte_config {
-	uint32_t master_lcore;       /**< Id of the master lcore */
+	uint32_t main_lcore;         /**< Id of the main lcore */
 	uint32_t lcore_count;        /**< Number of available logical cores. */
 	uint32_t numa_node_count;    /**< Number of detected NUMA nodes. */
 	uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	lcore_id = rte_lcore_id();
 
 	if (unlikely(lcore_id == LCORE_ID_ANY))
-		lcore_id = rte_get_master_lcore();
+		lcore_id = rte_get_main_lcore();
 
 	return &rand_states[lcore_id];
 }
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
 	struct rte_config *cfg = rte_eal_get_configuration();
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
 		if (lcore_config[i].core_role == ROLE_SERVICE) {
-			if ((unsigned int)i == cfg->master_lcore)
+			if ((unsigned int)i == cfg->main_lcore)
 				continue;
 			rte_service_lcore_add(i);
 			count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
 
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
-		config->master_lcore, thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+		config->main_lcore, thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-				"lcore-slave-%d", i);
+				"lcore-worker-%d", i);
 		rte_thread_setname(lcore_config[i].thread_id, thread_name);
 
 		ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index ddcf6a2e7a1a..f8f0d74b476c 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
 /**
  * Initialize the Environment Abstraction Layer (EAL).
  *
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
  * as possible in the application's main() function.
  *
  * The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
  *
  * When the multi-partition feature is supported, depending on the
  * configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_eal_trace_thread_remote_launch,
 	RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
-		unsigned int slave_id, int rc),
+		unsigned int worker_id, int rc),
 	rte_trace_point_emit_ptr(f);
 	rte_trace_point_emit_ptr(arg);
-	rte_trace_point_emit_u32(slave_id);
+	rte_trace_point_emit_u32(worker_id);
 	rte_trace_point_emit_int(rc);
 )
 RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
 /**
  * Launch a function on another lcore.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
  * is in the WAIT state (this is true after the first call to
  * rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
  *
  * When the remote lcore receives the message, it switches to
  * the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
  * the return value of f is stored in a local variable to be read using
  * rte_eal_wait_lcore().
  *
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
  * nothing about the completion of f.
  *
  * Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore on which the function should be executed.
  * @return
  *   - 0: Success. Execution of function f started on the remote lcore.
  *   - (-EBUSY): The remote lcore is not in a WAIT state.
  */
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
 
 /**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
  * launched on all logical cores.
  */
-enum rte_rmt_call_master_t {
-	SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
-	CALL_MASTER,     /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+	SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+	CALL_MAIN,     /**< lcore handler executed by main core. */
 };
 
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER	RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER	RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
 /**
  * Launch a function on all lcores.
  *
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
  * rte_eal_remote_launch() for each lcore.
  *
  * @param f
  *   The function to be called.
  * @param arg
  *   The argument for the function.
- * @param call_master
- *   If call_master set to SKIP_MASTER, the MASTER lcore does not call
- *   the function. If call_master is set to CALL_MASTER, the function
- *   is also called on master before returning. In any case, the master
+ * @param call_main
+ *   If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ *   the function. If call_main is set to CALL_MAIN, the function
+ *   is also called on main before returning. In any case, the main
  *   lcore returns as soon as it finished its job and knows nothing
  *   about the completion of f on the other lcores.
  * @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
  *     case, no message is sent to any of the lcores.
  */
 int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
-			     enum rte_rmt_call_master_t call_master);
+			     enum rte_rmt_call_main_t call_main);
 
 /**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
  *   The state of the lcore.
  */
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
 
 /**
  * Wait until an lcore finishes its job.
  *
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
  *
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
  * switch to the WAIT state. If the lcore is in RUNNING state, wait until
  * the lcore finishes its job and moves to the FINISHED state.
  *
- * @param slave_id
+ * @param worker_id
  *   The identifier of the lcore.
  * @return
- *   - 0: If the lcore identified by the slave_id is in a WAIT state.
+ *   - 0: If the lcore identified by the worker_id is in a WAIT state.
  *   - The value that was returned by the previous remote launch
- *     function call if the lcore identified by the slave_id was in a
+ *     function call if the lcore identified by the worker_id was in a
  *     FINISHED or RUNNING state. In this case, it changes the state
  *     of the lcore to WAIT.
  */
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
 
 /**
  * Wait until all lcores finish their jobs.
  *
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
  * rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  *
  * After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
  */
 void rte_eal_mp_wait_lcore(void);
 
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
 }
 
 /**
- * Get the id of the master lcore
+ * Get the id of the main lcore
  *
  * @return
- *   the id of the master lcore
+ *   the id of the main lcore
  */
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function returning the id of the main lcore
+ *
+ * @return
+ *   the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+	return rte_get_main_lcore();
+}
 
 /**
  * Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
  *
  * @param i
  *   The current lcore (reference).
- * @param skip_master
- *   If true, do not return the ID of the master lcore.
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
  * @param wrap
  *   If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
  *   return RTE_MAX_LCORE.
  * @return
  *   The next lcore_id or RTE_MAX_LCORE if not found.
  */
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 
 /**
  * Macro to browse all running lcores.
  */
 #define RTE_LCORE_FOREACH(i)						\
 	for (i = rte_get_next_lcore(-1, 0, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 0, 0))
 
 /**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
  */
-#define RTE_LCORE_FOREACH_SLAVE(i)					\
+#define RTE_LCORE_FOREACH_WORKER(i)					\
 	for (i = rte_get_next_lcore(-1, 1, 0);				\
-	     i<RTE_MAX_LCORE;						\
+	     i < RTE_MAX_LCORE;						\
 	     i = rte_get_next_lcore(i, 1, 0))
 
+#define RTE_LCORE_FOREACH_SLAVE(l)					\
+	RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
 /**
  * Callback prototype for initializing lcores.
  *
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
 	int socket_id;
 	const struct rte_config *config = rte_eal_get_configuration();
 
-	socket_id = rte_lcore_to_socket_id(config->master_lcore);
+	socket_id = rte_lcore_to_socket_id(config->main_lcore);
 
 	if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
-		RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+		RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
 }
 
 static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
 	eal_check_mem_on_local_socket();
 
 	if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
-			&lcore_config[config->master_lcore].cpuset) != 0) {
+			&lcore_config[config->main_lcore].cpuset) != 0) {
 		rte_eal_init_alert("Cannot set affinity");
 		rte_errno = EINVAL;
 		return -1;
 	}
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
-	RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
-		config->master_lcore, (uintptr_t)thread_id, cpuset,
+	RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+		config->main_lcore, (uintptr_t)thread_id, cpuset,
 		ret == 0 ? "" : "...");
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (pipe(lcore_config[i].pipe_master2slave) < 0)
+		if (pipe(lcore_config[i].pipe_main2worker) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (pipe(lcore_config[i].pipe_slave2master) < 0)
+		if (pipe(lcore_config[i].pipe_worker2main) < 0)
 			rte_panic("Cannot create pipe\n");
 
 		lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
 
 		/* Set thread_name for aid in debugging. */
 		snprintf(thread_name, sizeof(thread_name),
-			"lcore-slave-%d", i);
+			"lcore-worker-%d", i);
 		ret = rte_thread_setname(lcore_config[i].thread_id,
 						thread_name);
 		if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
 	/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
 	/* the allocation logic is a little bit convoluted, but here's how it
 	 * works, in a nutshell:
 	 *  - if user hasn't specified on which sockets to allocate memory via
-	 *    --socket-mem, we allocate all of our memory on master core socket.
+	 *    --socket-mem, we allocate all of our memory on main core socket.
 	 *  - if user has specified sockets to allocate memory on, there may be
 	 *    some "unused" memory left (e.g. if user has specified --socket-mem
 	 *    such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
 	for (i = 0; i < rte_socket_count(); i++) {
 		int hp_sizes = (int) internal_conf->num_hugepage_sizes;
 		uint64_t max_socket_mem, cur_socket_mem;
-		unsigned int master_lcore_socket;
+		unsigned int main_lcore_socket;
 		struct rte_config *cfg = rte_eal_get_configuration();
 		bool skip;
 
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
 		skip = active_sockets != 0 &&
 				internal_conf->socket_mem[socket_id] == 0;
 		/* ...or if we didn't specifically request memory on *any*
-		 * socket, and this is not master lcore
+		 * socket, and this is not main lcore
 		 */
-		master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
-		skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+		main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+		skip |= active_sockets == 0 && socket_id != main_lcore_socket;
 
 		if (skip) {
 			RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
 #include "eal_thread.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 	int rc = -EBUSY;
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		goto finish;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = write(m2s, &c, 1);
+		n = write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = read(s2m, &c, 1);
+		n = read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
 
 	rc = 0;
 finish:
-	rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+	rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
 	return rc;
 }
 
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
 	int n, ret;
 	unsigned lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
 
 		/* wait command */
 		do {
-			n = read(m2s, &c, 1);
+			n = read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = write(s2m, &c, 1);
+			n = write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a93dea9fe616..33ee2748ede0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -74,7 +74,7 @@ DPDK_21 {
 	rte_free;
 	rte_get_hpet_cycles;
 	rte_get_hpet_hz;
-	rte_get_master_lcore;
+	rte_get_main_lcore;
 	rte_get_next_lcore;
 	rte_get_tsc_hz;
 	rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index bc48f27ab39a..cbca20956210 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -350,8 +350,8 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	__rte_thread_init(config->master_lcore,
-		&lcore_config[config->master_lcore].cpuset);
+	__rte_thread_init(config->main_lcore,
+		&lcore_config[config->main_lcore].cpuset);
 
 	bscan = rte_bus_scan();
 	if (bscan < 0) {
@@ -360,16 +360,16 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	RTE_LCORE_FOREACH_SLAVE(i) {
+	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
-		 * create communication pipes between master thread
+		 * create communication pipes between main thread
 		 * and children
 		 */
-		if (_pipe(lcore_config[i].pipe_master2slave,
+		if (_pipe(lcore_config[i].pipe_main2worker,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
-		if (_pipe(lcore_config[i].pipe_slave2master,
+		if (_pipe(lcore_config[i].pipe_worker2main,
 			sizeof(char), _O_BINARY) < 0)
 			rte_panic("Cannot create pipe\n");
 
@@ -394,10 +394,10 @@ rte_eal_init(int argc, char **argv)
 	}
 
 	/*
-	 * Launch a dummy function on all slave lcores, so that master lcore
+	 * Launch a dummy function on all worker lcores, so that main lcore
 	 * knows they are all ready when this function returns.
 	 */
-	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 	return fctret;
 }
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
 #include "eal_windows.h"
 
 /*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
  * function f with argument arg. Once the execution is done, the
  * remote lcore switch in FINISHED state.
  */
 int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
 	int n;
 	char c = 0;
-	int m2s = lcore_config[slave_id].pipe_master2slave[1];
-	int s2m = lcore_config[slave_id].pipe_slave2master[0];
+	int m2w = lcore_config[worker_id].pipe_main2worker[1];
+	int w2m = lcore_config[worker_id].pipe_worker2main[0];
 
-	if (lcore_config[slave_id].state != WAIT)
+	if (lcore_config[worker_id].state != WAIT)
 		return -EBUSY;
 
-	lcore_config[slave_id].f = f;
-	lcore_config[slave_id].arg = arg;
+	lcore_config[worker_id].f = f;
+	lcore_config[worker_id].arg = arg;
 
 	/* send message */
 	n = 0;
 	while (n == 0 || (n < 0 && errno == EINTR))
-		n = _write(m2s, &c, 1);
+		n = _write(m2w, &c, 1);
 	if (n < 0)
 		rte_panic("cannot write on configuration pipe\n");
 
 	/* wait ack */
 	do {
-		n = _read(s2m, &c, 1);
+		n = _read(w2m, &c, 1);
 	} while (n < 0 && errno == EINTR);
 
 	if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
 	int n, ret;
 	unsigned int lcore_id;
 	pthread_t thread_id;
-	int m2s, s2m;
+	int m2w, w2m;
 	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
 
 	thread_id = pthread_self();
 
 	/* retrieve our lcore_id from the configuration structure */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
 		if (thread_id == lcore_config[lcore_id].thread_id)
 			break;
 	}
 	if (lcore_id == RTE_MAX_LCORE)
 		rte_panic("cannot retrieve lcore id\n");
 
-	m2s = lcore_config[lcore_id].pipe_master2slave[0];
-	s2m = lcore_config[lcore_id].pipe_slave2master[1];
+	m2w = lcore_config[lcore_id].pipe_main2worker[0];
+	w2m = lcore_config[lcore_id].pipe_worker2main[1];
 
 	__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
 
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
 
 		/* wait command */
 		do {
-			n = _read(m2s, &c, 1);
+			n = _read(m2w, &c, 1);
 		} while (n < 0 && errno == EINTR);
 
 		if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
 		/* send ack */
 		n = 0;
 		while (n == 0 || (n < 0 && errno == EINTR))
-			n = _write(s2m, &c, 1);
+			n = _write(w2m, &c, 1);
 		if (n < 0)
 			rte_panic("cannot write on configuration pipe\n");
 
-- 
2.27.0


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [dpdk-dev v11 1/4] cryptodev: change crypto symmetric vector structure
  @ 2020-10-09 21:11  3%   ` Fan Zhang
    1 sibling, 0 replies; 200+ results
From: Fan Zhang @ 2020-10-09 21:11 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, Fan Zhang, Adam Dybkowski, Konstantin Ananyev

This patch updates the ``rte_crypto_sym_vec`` structure to add
support for both the cpu_crypto synchronous operation and the
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.
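
As a minimal sketch (illustration only, mirroring the test changes in this
patch; the helper name is hypothetical), a single-operation descriptor would
now be filled like this:

#include <rte_crypto_sym.h>

/* Hypothetical helper, for illustration only: prepare a one-element
 * rte_crypto_sym_vec using the new rte_crypto_va_iova_ptr descriptors.
 * Only the virtual addresses are filled, as the CPU crypto path does
 * not need the IOVAs; the caller provides all buffers.
 */
static void
sym_vec_fill_one(struct rte_crypto_sym_vec *vec,
		 struct rte_crypto_sgl *sgl,
		 struct rte_crypto_va_iova_ptr *iv,
		 struct rte_crypto_va_iova_ptr *aad,
		 struct rte_crypto_va_iova_ptr *digest,
		 int32_t *status,
		 void *iv_va, void *aad_va, void *digest_va)
{
	iv->va = iv_va;
	aad->va = aad_va;
	digest->va = digest_va;

	vec->num = 1;
	vec->sgl = sgl;
	vec->iv = iv;
	vec->aad = aad;		/* unioned with auth_iv for chained ops */
	vec->digest = digest;
	vec->status = status;
}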

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/test_cryptodev.c                  | 25 ++++++++------
 doc/guides/prog_guide/cryptodev_lib.rst    |  3 +-
 doc/guides/rel_notes/release_20_11.rst     |  3 ++
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c   | 18 +++++-----
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c |  9 +++--
 lib/librte_cryptodev/rte_crypto_sym.h      | 40 ++++++++++++++++------
 lib/librte_ipsec/esp_inb.c                 | 12 +++----
 lib/librte_ipsec/esp_outb.c                | 12 +++----
 lib/librte_ipsec/misc.h                    |  6 ++--
 9 files changed, 79 insertions(+), 49 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ac2a36bc2..62a265520 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -151,11 +151,11 @@ static void
 process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, aad_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -171,13 +171,17 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->aead.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
+	symvec.aad = &aad_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	/* for CPU crypto the IOVA address is not required */
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->aead.digest.data;
+	aad_ptr.va = (void *)sop->aead.aad.data;
+
 	ofs.raw = 0;
 
 	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
@@ -193,11 +197,11 @@ static void
 process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 {
 	int32_t n, st;
-	void *iv;
 	struct rte_crypto_sym_op *sop;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sgl sgl;
 	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_va_iova_ptr iv_ptr, digest_ptr;
 	struct rte_crypto_vec vec[UINT8_MAX];
 
 	sop = op->sym;
@@ -213,13 +217,14 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 	sgl.vec = vec;
 	sgl.num = n;
 	symvec.sgl = &sgl;
-	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
-	symvec.iv = &iv;
-	symvec.aad = (void **)&sop->aead.aad.data;
-	symvec.digest = (void **)&sop->auth.digest.data;
+	symvec.iv = &iv_ptr;
+	symvec.digest = &digest_ptr;
 	symvec.status = &st;
 	symvec.num = 1;
 
+	iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	digest_ptr.va = (void *)sop->auth.digest.data;
+
 	ofs.raw = 0;
 	ofs.ofs.cipher.head = sop->cipher.data.offset - sop->auth.data.offset;
 	ofs.ofs.cipher.tail = (sop->auth.data.offset + sop->auth.data.length) -
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..e7ba35c2d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -620,7 +620,8 @@ operation descriptor (``struct rte_crypto_sym_vec``) containing:
   descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
   of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
   an array of segment descriptors ``struct rte_crypto_vec``;
-- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointers to arrays of size ``num`` containing IV, AAD and digest information
+  in the ``cpu_crypto`` sub-structure,
 - pointer to an array of size ``num`` where status information will be stored
   for each operation.
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 8b911488c..2973b2a33 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -302,6 +302,9 @@ API Changes
   ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
   ``rte_fpga_lte_fec_conf``.
 
+* The structure ``rte_crypto_sym_vec`` is updated to support both
+  cpu_crypto synchronous operation and asynchronous raw data-path APIs.
+
 
 ABI Changes
 -----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1d2a0ce00..973b61bd6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -464,9 +464,10 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -482,9 +483,10 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+			&vec->sgl[i], vec->iv[i].va,
+			vec->aad[i].va);
 		 vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -505,9 +507,9 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
@@ -528,9 +530,9 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i]);
+			&vec->sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
-			gdata_ctx, vec->digest[i]);
+			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
 	}
 
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 34a39ca99..39f90f537 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1877,7 +1877,7 @@ generate_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			memcpy(vec->digest[i], dgst[i], len);
+			memcpy(vec->digest[i].va, dgst[i], len);
 			k++;
 		}
 	}
@@ -1893,7 +1893,7 @@ verify_sync_dgst(struct rte_crypto_sym_vec *vec,
 
 	for (i = 0, k = 0; i != vec->num; i++) {
 		if (vec->status[i] == 0) {
-			if (memcmp(vec->digest[i], dgst[i], len) != 0)
+			if (memcmp(vec->digest[i].va, dgst[i], len) != 0)
 				vec->status[i] = EBADMSG;
 			else
 				k++;
@@ -1956,9 +1956,8 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
 		}
 
 		/* Submit job for processing */
-		set_cpu_mb_job_params(job, s, sofs, buf, len,
-			vec->iv[i], vec->aad[i], tmp_dgst[i],
-			&vec->status[i]);
+		set_cpu_mb_job_params(job, s, sofs, buf, len, vec->iv[i].va,
+			vec->aad[i].va, tmp_dgst[i], &vec->status[i]);
 		job = submit_sync_job(mb_mgr);
 		j++;
 
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..e1f23d303 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -51,26 +51,44 @@ struct rte_crypto_sgl {
 };
 
 /**
- * Synchronous operation descriptor.
- * Supposed to be used with CPU crypto API call.
+ * Crypto virtual and IOVA address descriptor, used to describe cryptographic
+ * data buffer without the length information. The length information is
+ * normally predefined during session creation.
+ */
+struct rte_crypto_va_iova_ptr {
+	void *va;
+	rte_iova_t iova;
+};
+
+/**
+ * Raw data operation descriptor.
+ * Supposed to be used with synchronous CPU crypto API call or asynchronous
+ * RAW data path API call.
  */
 struct rte_crypto_sym_vec {
+	/** number of operations to perform */
+	uint32_t num;
 	/** array of SGL vectors */
 	struct rte_crypto_sgl *sgl;
-	/** array of pointers to IV */
-	void **iv;
-	/** array of pointers to AAD */
-	void **aad;
+	/** array of pointers to cipher IV */
+	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
-	void **digest;
+	struct rte_crypto_va_iova_ptr *digest;
+
+	__extension__
+	union {
+		/** array of pointers to auth IV, used for chain operation */
+		struct rte_crypto_va_iova_ptr *auth_iv;
+		/** array of pointers to AAD, used for AEAD operation */
+		struct rte_crypto_va_iova_ptr *aad;
+	};
+
 	/**
 	 * array of statuses for each operation:
-	 *  - 0 on success
-	 *  - errno on error
+	 * - 0 on success
+	 * - errno on error
 	 */
 	int32_t *status;
-	/** number of operations to perform */
-	uint32_t num;
 };
 
 /**
diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 96eec0131..2b1df6a03 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -693,9 +693,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 	struct rte_ipsec_sa *sa;
 	struct replay_sqn *rsn;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -720,9 +720,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
 				l4ofs + k, rc, ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index fb9d5864c..1e181cf2c 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -449,9 +449,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 	uint32_t i, k, n;
 	uint32_t l2, l3;
 	union sym_op_data icv;
-	void *iv[num];
-	void *aad[num];
-	void *dgst[num];
+	struct rte_crypto_va_iova_ptr iv[num];
+	struct rte_crypto_va_iova_ptr aad[num];
+	struct rte_crypto_va_iova_ptr dgst[num];
 	uint32_t dr[num];
 	uint32_t l4ofs[num];
 	uint32_t clen[num];
@@ -488,9 +488,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 				ivbuf[k]);
 
 			/* fill iv, digest and aad */
-			iv[k] = ivbuf[k];
-			aad[k] = icv.va + sa->icv_len;
-			dgst[k++] = icv.va;
+			iv[k].va = ivbuf[k];
+			aad[k].va = icv.va + sa->icv_len;
+			dgst[k++].va = icv.va;
 		} else {
 			dr[i - k] = i;
 			rte_errno = -rc;
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index 1b543ed87..79b9a2076 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -112,7 +112,9 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 static inline void
 cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
-	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	struct rte_crypto_va_iova_ptr iv[],
+	struct rte_crypto_va_iova_ptr aad[],
+	struct rte_crypto_va_iova_ptr dgst[], uint32_t l4ofs[],
 	uint32_t clen[], uint32_t num)
 {
 	uint32_t i, j, n;
@@ -136,8 +138,8 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 			/* fill the request structure */
 			symvec.sgl = &vecpkt[j];
 			symvec.iv = &iv[j];
-			symvec.aad = &aad[j];
 			symvec.digest = &dgst[j];
+			symvec.aad = &aad[j];
 			symvec.status = &st[j];
 			symvec.num = i - j;
 
-- 
2.20.1


^ permalink raw reply	[relevance 3%]
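
A minimal sketch of how a caller might populate the reworked
rte_crypto_sym_vec is shown below. The buffer names (iv_buf, aad_buf,
digest_buf) and the helper itself are illustrative and not part of the
patch; note that the CPU crypto path touched above only dereferences the
va members, while the iova members are intended for the raw data path.

#include <rte_crypto_sym.h>
#include <rte_memory.h>

/* Sketch only: fill element i of an AEAD rte_crypto_sym_vec using the
 * new rte_crypto_va_iova_ptr descriptors. */
static void
fill_sym_vec_elem(struct rte_crypto_sym_vec *vec, uint32_t i,
	void *iv_buf, void *aad_buf, void *digest_buf)
{
	vec->iv[i].va = iv_buf;
	vec->iv[i].iova = rte_mem_virt2iova(iv_buf);
	vec->aad[i].va = aad_buf;	/* union member; AEAD uses aad */
	vec->aad[i].iova = rte_mem_virt2iova(aad_buf);
	vec->digest[i].va = digest_buf;
	vec->digest[i].iova = rte_mem_virt2iova(digest_buf);
	/* vec->status[i] is written back by the driver per element */
}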

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats
  2020-10-06  8:33  0%           ` Olivier Matz
@ 2020-10-09 20:32  0%             ` Ferruh Yigit
  2020-10-10  8:09  0%               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-09 20:32 UTC (permalink / raw)
  To: Min Hu (Connor), Thomas Monjalon, Honnappa Nagarahalli
  Cc: Olivier Matz, Stephen Hemminger, techboard, bruce.richardson,
	jerinj, Ray Kinsella, dev

On 10/6/2020 9:33 AM, Olivier Matz wrote:
> Hi,
> 
> On Mon, Oct 05, 2020 at 01:23:08PM +0100, Ferruh Yigit wrote:
>> On 9/28/2020 4:43 PM, Stephen Hemminger wrote:
>>> On Mon, 28 Sep 2020 17:24:26 +0200
>>> Thomas Monjalon <thomas@monjalon.net> wrote:
>>>
>>>> 28/09/2020 15:53, Ferruh Yigit:
>>>>> On 9/28/2020 10:16 AM, Thomas Monjalon wrote:
>>>>>> 28/09/2020 10:59, Ferruh Yigit:
>>>>>>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
>>>>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>>>>
>>>>>>>> Currently, only statistics of rx/tx queues with queue_id less than
>>>>>>>> RTE_ETHDEV_QUEUE_STAT_CNTRS can be displayed. If there is a certain
>>>>>>>> application scenario that it needs to use 256 or more than 256 queues
>>>>>>>> and display all statistics of rx/tx queue. At this moment, we have to
>>>>>>>> change the macro to be equaled to the queue number.
>>>>>>>>
>>>>>>>> However, modifying the macro to be greater than 256 will trigger
>>>>>>>> many errors and warnings from test-pmd, PMD drivers and librte_ethdev
>>>>>>>> during compiling dpdk project. But it is possible and permitted that
>>>>>>>> rx/tx queue number is greater than 256 and all statistics of rx/tx
>>>>>>>> queue need to be displayed. In addition, the data type of rx/tx queue
>>>>>>>> number in rte_eth_dev_configure API is 'uint16_t'. So it is unreasonable
>>>>>>>> to use the 'uint8_t' type for variables that control which per-queue
>>>>>>>> statistics can be displayed.
>>>>>>
>>>>>> The explanation is too much complex and misleading.
>>>>>> You mean you cannot increase RTE_ETHDEV_QUEUE_STAT_CNTRS
>>>>>> above 256 because it is an 8-bit type?
>>>>>>
>>>>>> [...]
>>>>>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>>>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>>>>>>      int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
>>>>>>>> -		uint16_t tx_queue_id, uint8_t stat_idx);
>>>>>>>> +		uint16_t tx_queue_id, uint16_t stat_idx);
>>>>>> [...]
>>>>>>>>      int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
>>>>>>>>      					   uint16_t rx_queue_id,
>>>>>>>> -					   uint8_t stat_idx);
>>>>>>>> +					   uint16_t stat_idx);
>>>>>> [...]
>>>>>>> cc'ed tech-board,
>>>>>>>
>>>>>>> The patch breaks the ethdev ABI without a deprecation notice from previous
>>>>>>> release(s).
>>>>>>>
>>>>>>> It is mainly a fix to the port_id storage type, which we have updated from
>>>>>>> uint8_t to uint16_t in past but some seems remained for
>>>>>>> 'rte_eth_dev_set_tx_queue_stats_mapping()' &
>>>>>>> 'rte_eth_dev_set_rx_queue_stats_mapping()' APIs.
>>>>>>
>>>>>> No, it is not related to the port id, but the number of limited stats.
>>>>>
>>>>> Right, it is not related to the port id, it is fixing the storage type for index
>>>>> used to map the queue stats.
>>>>>>> Since the ethdev library already heavily breaks the ABI this release, I am for
>>>>>>> getting this fix, instead of waiting the fix for one more year.
>>>>>>
>>>>>> If stats can be managed for more than 256 queues, I think it means
>>>>>> it is not limited. In this case, we probably don't need the API
>>>>>> *_queue_stats_mapping which was invented for a limitation of ixgbe.
>>>>>>
>>>>>> The problem is probably somewhere else (in testpmd),
>>>>>> that's why I am against this patch.
>>>>>
>>>>> This patch is not to fix queue stats mapping, I agree there are problems related
>>>>> to it, already shared as comment to this set.
>>>>>
>>>>> But this patch is to fix the build errors when 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
>>>>> needs to set more than 255. Where the build errors seems around the
>>>>> stats_mapping APIs.
>>>>
>>>> It is not said this API is supposed to manage more than 256 queues mapping.
>>>> In general we should not need this API.
>>>> I think it is solving the wrong problem.
>>>
>>>
>>> The original API is a band aid for the limited number of statistics counters
>>> in the Intel IXGBE hardware. It crept into to the DPDK as an API. I would rather
>>> have per-queue statistics and make ixgbe say "not supported"
>>>
>>
>> The current issue is not directly related to '*_queue_stats_mapping' APIs.
>>
>> Problem is not able to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255.
>> User may need to set the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, since it is
>> used to define size of the stats counter.
>> "uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];"
>>
>> When 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, it gives multiple build errors,
>> the one in the ethdev is like [1].
>>
>> This can be fixed two ways,
>> a) increase the size of 'stat_idx' storage type to u16 in the
>> '*_queue_stats_mapping' APIs, this is what this patch does.
>> b) Fix with a casting in the comparison, without changing the APIs.
>>
>> I think both are OK, but is (b) more preferable?
> 
> I think the patch (a) is ok, knowing that RTE_ETHDEV_QUEUE_STAT_CNTRS is
> not modified.
> 
> On the substance, I agree with Thomas that the queue_stats_mapping API
> should be replaced by xstats.
> 

This has been discussed in the last technical board meeting; the decision was to
use xstats to get queue related statistics [2].

But on a second look, 'RTE_ETHDEV_QUEUE_STAT_CNTRS' is still involved even when
xstats is used, since xstats relies on 'rte_eth_stats_get()' to get the queue
statistics.
So for the case where a device has more than 255 queues,
'RTE_ETHDEV_QUEUE_STAT_CNTRS' still needs to be set > 255, which will cause the
build error.

I have an AR to send a deprecation notice for the current method of getting the
queue statistics, and to limit the old method to 256 queues. But since xstats is
just a wrapper around the old method, I am not quite sure how deprecating it
will work.

@Thomas, @Honnappa, can you give some more insight on the issue?


[2]
https://mails.dpdk.org/archives/dev/2020-October/185299.html

> 
>>
>>
>> [1]
>> ../lib/librte_ethdev/rte_ethdev.c: In function ‘set_queue_stats_mapping’:
>> ../lib/librte_ethdev/rte_ethdev.c:2943:15: warning: comparison is always
>> false due to limited range of data type [-Wtype-limits]
>>   2943 |  if (stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)
>>        |               ^~


^ permalink raw reply	[relevance 0%]
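
For reference, a sketch of the cast-based option (b) discussed above; this
is illustrative only and not the change applied here. It assumes the
ethdev-internal check quoted in the build warning:

/* Keeping the uint8_t stat_idx API: the explicit widening stops
 * -Wtype-limits from flagging an always-false comparison once
 * RTE_ETHDEV_QUEUE_STAT_CNTRS is raised above 255, at the cost of the
 * mapping staying limited to 256 entries. */
static int
queue_stats_mapping_idx_check(uint8_t stat_idx)
{
	if ((uint32_t)stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)
		return -EINVAL;
	return 0;
}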

* Re: [dpdk-dev] [PATCH v2 1/1] cryptodev: remove v20 ABI compatibility
  2020-10-08  8:32 14%   ` [dpdk-dev] [PATCH v2 1/1] " Adam Dybkowski
@ 2020-10-09 17:41  4%     ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-09 17:41 UTC (permalink / raw)
  To: Adam Dybkowski, dev
  Cc: fiona.trahe, david.marchand, declan.doherty, Arek Kusztal

> This reverts commit a0f0de06d457753c94688d551a6e8659b4d4e041 as the
> rte_cryptodev_info_get function versioning was a temporary solution
> to maintain ABI compatibility for ChaCha20-Poly1305 and is not
> needed in 20.11.
> 
> Fixes: a0f0de06d457 ("cryptodev: fix ABI compatibility for ChaCha20-Poly1305")
> 
> Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
> Reviewed-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Applied to dpdk-next-crypto

Thanks.


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
  2020-10-07 17:18  2%     ` [dpdk-dev] [PATCH v5 " Vikas Gupta
  @ 2020-10-09 15:00  0%       ` Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-09 15:00 UTC (permalink / raw)
  To: Vikas Gupta, dev; +Cc: vikram.prakash

> Hi,
> This patchset contains support for Crypto offload on Broadcom’s
> Stingray/Stingray2 SoCs having FlexSparc unit.
> BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.
> 
> The patchset progressively adds major modules as below.
> a) Detection of platform-device based on the known registered platforms and
> attaching with VFIO.
> b) Creation of Cryptodevice.
> c) Addition of session handling.
> d) Add Cryptodevice into test Cryptodev framework.
> 
> The patchset has been tested on the above mentioned SoCs.
> 
> Regards,
> Vikas
> 
> Changes from v0->v1:
>       Updated the ABI version in
> file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
> 
> Changes from v1->v2:
> 	- Fix compilation errors and coding style warnings.
> 	- Use global test crypto suite suggested by Adam Dybkowski
> 
> Changes from v2->v3:
> 	- Release notes updated.
> 	- bcmfs.rst updated with missing information about installation.
> 	- Review comments from patch1 from v2 addressed.
> 	- Updated description about dependency of PMD driver on
> VFIO_PRESENT.
> 	- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2
> addressed)
> 	- Comments on patch6 from v2 addressed and capability list is fixed.
> 		Removed redundant enums and macros from the file
> 		bcmfs_sym_defs.h and updated other impacted APIs
> accordingly.
> 		patch7 too is updated due to removal of redundancy.
> 	  Thanks! to Akhil for pointing out the redundancy.
> 	- Fix minor code style issues in few files as part of review.
> 
> Changes from v3->v4:
> 	- Code style issues fixed.
> 	- Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c
> 
> Changes from v4->v5:
> 	- Change of barrier API in bcmfs4_rm.c. Missed one in v4
> 

Series applied to dpdk-next-crypto with 2 fixes, as mentioned in the last patch.

Thanks.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v9 8/8] sched: remove redundant code
  2020-10-09 12:39  3%   ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
  2020-10-09 12:39  4%     ` [dpdk-dev] [PATCH v9 1/8] sched: add support profile table Savinay Dharmappa
  2020-10-09 12:39  2%     ` [dpdk-dev] [PATCH v9 3/8] sched: update subport rate dynamically Savinay Dharmappa
@ 2020-10-09 12:39  5%     ` Savinay Dharmappa
  2020-10-11 20:11  0%     ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Thomas Monjalon
  3 siblings, 0 replies; 200+ results
From: Savinay Dharmappa @ 2020-10-09 12:39 UTC (permalink / raw)
  To: cristian.dumitrescu, jasvinder.singh, dev; +Cc: savinay.dharmappa

Remove redundant data structure fields.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |  3 +++
 lib/librte_sched/rte_sched.h           | 12 ------------
 2 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 85d56d46c..116969d06 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -296,6 +296,9 @@ ABI Changes
   * Added ``subport_profile_id`` as a argument to function
     ``rte_sched_subport_config``.
 
+  * ``tb_rate``, ``tc_rate``, ``tc_period`` and
+    ``tb_size`` are removed from ``struct rte_sched_subport_params``.
+
 Known Issues
 ------------
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 1506c6487..c1a772b70 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -149,18 +149,6 @@ struct rte_sched_pipe_params {
  * byte.
  */
 struct rte_sched_subport_params {
-	/** Token bucket rate (measured in bytes per second) */
-	uint64_t tb_rate;
-
-	/** Token bucket size (measured in credits) */
-	uint64_t tb_size;
-
-	/** Traffic class rates (measured in bytes per second) */
-	uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
-
-	/** Enforcement period for rates (measured in milliseconds) */
-	uint64_t tc_period;
-
 	/** Number of subport pipes.
 	 * The subport can enable/allocate fewer pipes than the maximum
 	 * number set through struct port_params::n_max_pipes_per_subport,
-- 
2.17.1


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v9 3/8] sched: update subport rate dynamically
  2020-10-09 12:39  3%   ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
  2020-10-09 12:39  4%     ` [dpdk-dev] [PATCH v9 1/8] sched: add support profile table Savinay Dharmappa
@ 2020-10-09 12:39  2%     ` Savinay Dharmappa
  2020-10-09 12:39  5%     ` [dpdk-dev] [PATCH v9 8/8] sched: remove redundant code Savinay Dharmappa
  2020-10-11 20:11  0%     ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Thomas Monjalon
  3 siblings, 0 replies; 200+ results
From: Savinay Dharmappa @ 2020-10-09 12:39 UTC (permalink / raw)
  To: cristian.dumitrescu, jasvinder.singh, dev; +Cc: savinay.dharmappa

Add support to update subport rate dynamically.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
---
 app/test/test_sched.c                    |   2 +-
 doc/guides/rel_notes/deprecation.rst     |   6 -
 doc/guides/rel_notes/release_20_11.rst   |   9 +
 drivers/net/softnic/rte_eth_softnic_tm.c |   6 +-
 examples/ip_pipeline/tmgr.c              |   6 +-
 examples/qos_sched/init.c                |   3 +-
 lib/librte_sched/rte_sched.c             | 415 ++++++++++-------------
 lib/librte_sched/rte_sched.h             |  13 +-
 8 files changed, 213 insertions(+), 247 deletions(-)

diff --git a/app/test/test_sched.c b/app/test/test_sched.c
index fc31080ef..5e5c2a59b 100644
--- a/app/test/test_sched.c
+++ b/app/test/test_sched.c
@@ -138,7 +138,7 @@ test_sched(void)
 	port = rte_sched_port_config(&port_param);
 	TEST_ASSERT_NOT_NULL(port, "Error config sched port\n");
 
-	err = rte_sched_subport_config(port, SUBPORT, subport_param);
+	err = rte_sched_subport_config(port, SUBPORT, subport_param, 0);
 	TEST_ASSERT_SUCCESS(err, "Error config sched, err=%d\n", err);
 
 	for (pipe = 0; pipe < subport_param[0].n_pipes_per_subport_enabled; pipe++) {
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087..f7363a585 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -212,12 +212,6 @@ Deprecation Notices
   in "rte_sched.h". These changes are aligned to improvements suggested in the
   RFC https://mails.dpdk.org/archives/dev/2018-November/120035.html.
 
-* sched: To allow dynamic configuration of the subport bandwidth profile,
-  changes will be made to data structures ``rte_sched_subport_params``,
-  ``rte_sched_port_params`` and new data structure, API functions will be
-  defined in ``rte_sched.h``. These changes are aligned as suggested in the
-  RFC https://mails.dpdk.org/archives/dev/2020-July/175161.html
-
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6968c27f6..85d56d46c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -136,6 +136,12 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Added support to update subport bandwidth dynamically.**
+
+   * Added new API ``rte_sched_port_subport_profile_add`` to add new
+     subport bandwidth profile to subport porfile table at runtime.
+
+   * Added support to update subport rate dynamically.
 
 Removed Items
 -------------
@@ -287,6 +293,9 @@ ABI Changes
 
   * Added new fields to ``struct rte_sched_subport_port_params``.
 
+  * Added ``subport_profile_id`` as a argument to function
+    ``rte_sched_subport_config``.
+
 Known Issues
 ------------
 
diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index d30976378..5199dd2cd 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -92,7 +92,7 @@ softnic_tmgr_port_create(struct pmd_internals *p,
 
 		status = rte_sched_subport_config(sched,
 			subport_id,
-			&t->subport_params[subport_id]);
+			&t->subport_params[subport_id], 0);
 		if (status) {
 			rte_sched_port_free(sched);
 			return NULL;
@@ -1141,7 +1141,7 @@ update_subport_tc_rate(struct rte_eth_dev *dev,
 
 	/* Update the subport configuration. */
 	if (rte_sched_subport_config(SCHED(p),
-		subport_id, &subport_params))
+		subport_id, &subport_params, 0))
 		return -1;
 
 	/* Commit changes. */
@@ -2912,7 +2912,7 @@ update_subport_rate(struct rte_eth_dev *dev,
 
 	/* Update the subport configuration. */
 	if (rte_sched_subport_config(SCHED(p), subport_id,
-		&subport_params))
+		&subport_params, 0))
 		return -1;
 
 	/* Commit changes. */
diff --git a/examples/ip_pipeline/tmgr.c b/examples/ip_pipeline/tmgr.c
index 91ccbf60f..46c6a83a4 100644
--- a/examples/ip_pipeline/tmgr.c
+++ b/examples/ip_pipeline/tmgr.c
@@ -119,7 +119,8 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params)
 		status = rte_sched_subport_config(
 			s,
 			i,
-			&subport_profile[0]);
+			&subport_profile[0],
+			0);
 
 		if (status) {
 			rte_sched_port_free(s);
@@ -180,7 +181,8 @@ tmgr_subport_config(const char *port_name,
 	status = rte_sched_subport_config(
 		port->s,
 		subport_id,
-		&subport_profile[subport_profile_id]);
+		&subport_profile[subport_profile_id],
+		0);
 
 	return status;
 }
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 06328ddb2..b188c624b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -314,7 +314,8 @@ app_init_sched_port(uint32_t portid, uint32_t socketid)
 	}
 
 	for (subport = 0; subport < port_params.n_subports_per_port; subport ++) {
-		err = rte_sched_subport_config(port, subport, &subport_params[subport]);
+		err = rte_sched_subport_config(port, subport,
+				&subport_params[subport], 0);
 		if (err) {
 			rte_exit(EXIT_FAILURE, "Unable to config sched subport %u, err=%d\n",
 					subport, err);
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 895b40d72..7c5688068 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -123,6 +123,7 @@ struct rte_sched_grinder {
 	uint32_t productive;
 	uint32_t pindex;
 	struct rte_sched_subport *subport;
+	struct rte_sched_subport_profile *subport_params;
 	struct rte_sched_pipe *pipe;
 	struct rte_sched_pipe_profile *pipe_params;
 
@@ -151,16 +152,11 @@ struct rte_sched_grinder {
 struct rte_sched_subport {
 	/* Token bucket (TB) */
 	uint64_t tb_time; /* time of last update */
-	uint64_t tb_period;
-	uint64_t tb_credits_per_period;
-	uint64_t tb_size;
 	uint64_t tb_credits;
 
 	/* Traffic classes (TCs) */
 	uint64_t tc_time; /* time of next update */
-	uint64_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
 	uint64_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
-	uint64_t tc_period;
 
 	/* TC oversubscription */
 	uint64_t tc_ov_wm;
@@ -174,6 +170,8 @@ struct rte_sched_subport {
 	/* Statistics */
 	struct rte_sched_subport_stats stats __rte_cache_aligned;
 
+	/* subport profile */
+	uint32_t profile;
 	/* Subport pipes */
 	uint32_t n_pipes_per_subport_enabled;
 	uint32_t n_pipe_profiles;
@@ -834,18 +832,6 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
 		return -EINVAL;
 	}
 
-	if (params->tb_rate == 0 || params->tb_rate > rate) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Incorrect value for tb rate\n", __func__);
-		return -EINVAL;
-	}
-
-	if (params->tb_size == 0) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Incorrect value for tb size\n", __func__);
-		return -EINVAL;
-	}
-
 	/* qsize: if non-zero, power of 2,
 	 * no bigger than 32K (due to 16-bit read/write pointers)
 	 */
@@ -859,29 +845,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
 		}
 	}
 
-	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
-		uint64_t tc_rate = params->tc_rate[i];
-		uint16_t qsize = params->qsize[i];
-
-		if ((qsize == 0 && tc_rate != 0) ||
-			(qsize != 0 && tc_rate == 0) ||
-			(tc_rate > params->tb_rate)) {
-			RTE_LOG(ERR, SCHED,
-				"%s: Incorrect value for tc rate\n", __func__);
-			return -EINVAL;
-		}
-	}
-
-	if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 ||
-		params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Incorrect qsize or tc rate(best effort)\n", __func__);
-		return -EINVAL;
-	}
-
-	if (params->tc_period == 0) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Incorrect value for tc period\n", __func__);
+	if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
+		RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__);
 		return -EINVAL;
 	}
 
@@ -1098,48 +1063,6 @@ rte_sched_port_free(struct rte_sched_port *port)
 	rte_free(port);
 }
 
-static void
-rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
-{
-	struct rte_sched_subport *s = port->subports[i];
-
-	RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
-		"	Token bucket: period = %"PRIu64", credits per period = %"PRIu64
-		", size = %"PRIu64"\n"
-		"	Traffic classes: period = %"PRIu64"\n"
-		"	credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
-		", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
-		", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
-		"	Best effort traffic class oversubscription: wm min = %"PRIu64
-		", wm max = %"PRIu64"\n",
-		i,
-
-		/* Token bucket */
-		s->tb_period,
-		s->tb_credits_per_period,
-		s->tb_size,
-
-		/* Traffic classes */
-		s->tc_period,
-		s->tc_credits_per_period[0],
-		s->tc_credits_per_period[1],
-		s->tc_credits_per_period[2],
-		s->tc_credits_per_period[3],
-		s->tc_credits_per_period[4],
-		s->tc_credits_per_period[5],
-		s->tc_credits_per_period[6],
-		s->tc_credits_per_period[7],
-		s->tc_credits_per_period[8],
-		s->tc_credits_per_period[9],
-		s->tc_credits_per_period[10],
-		s->tc_credits_per_period[11],
-		s->tc_credits_per_period[12],
-
-		/* Best effort traffic class oversubscription */
-		s->tc_ov_wm_min,
-		s->tc_ov_wm_max);
-}
-
 static void
 rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports)
 {
@@ -1158,10 +1081,12 @@ rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports)
 int
 rte_sched_subport_config(struct rte_sched_port *port,
 	uint32_t subport_id,
-	struct rte_sched_subport_params *params)
+	struct rte_sched_subport_params *params,
+	uint32_t subport_profile_id)
 {
 	struct rte_sched_subport *s = NULL;
 	uint32_t n_subports = subport_id;
+	struct rte_sched_subport_profile *profile;
 	uint32_t n_subport_pipe_queues, i;
 	uint32_t size0, size1, bmp_mem_size;
 	int status;
@@ -1181,165 +1106,183 @@ rte_sched_subport_config(struct rte_sched_port *port,
 		return -EINVAL;
 	}
 
-	status = rte_sched_subport_check_params(params,
-		port->n_pipes_per_subport,
-		port->rate);
-	if (status != 0) {
-		RTE_LOG(NOTICE, SCHED,
-			"%s: Port scheduler params check failed (%d)\n",
-			__func__, status);
-
+	if (subport_profile_id >= port->n_max_subport_profiles) {
+		RTE_LOG(ERR, SCHED, "%s: "
+			"Number of subport profile exceeds the max limit\n",
+			__func__);
 		rte_sched_free_memory(port, n_subports);
 		return -EINVAL;
 	}
 
-	/* Determine the amount of memory to allocate */
-	size0 = sizeof(struct rte_sched_subport);
-	size1 = rte_sched_subport_get_array_base(params,
-				e_RTE_SCHED_SUBPORT_ARRAY_TOTAL);
+	/** Memory is allocated only on first invocation of the api for a
+	 * given subport. Subsequent invocation on same subport will just
+	 * update subport bandwidth parameter.
+	 **/
+	if (port->subports[subport_id] == NULL) {
 
-	/* Allocate memory to store the data structures */
-	s = rte_zmalloc_socket("subport_params", size0 + size1,
-		RTE_CACHE_LINE_SIZE, port->socket);
-	if (s == NULL) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Memory allocation fails\n", __func__);
+		status = rte_sched_subport_check_params(params,
+			port->n_pipes_per_subport,
+			port->rate);
+		if (status != 0) {
+			RTE_LOG(NOTICE, SCHED,
+				"%s: Port scheduler params check failed (%d)\n",
+				__func__, status);
 
-		rte_sched_free_memory(port, n_subports);
-		return -ENOMEM;
-	}
+			rte_sched_free_memory(port, n_subports);
+			return -EINVAL;
+		}
 
-	n_subports++;
+		/* Determine the amount of memory to allocate */
+		size0 = sizeof(struct rte_sched_subport);
+		size1 = rte_sched_subport_get_array_base(params,
+					e_RTE_SCHED_SUBPORT_ARRAY_TOTAL);
 
-	/* Port */
-	port->subports[subport_id] = s;
+		/* Allocate memory to store the data structures */
+		s = rte_zmalloc_socket("subport_params", size0 + size1,
+			RTE_CACHE_LINE_SIZE, port->socket);
+		if (s == NULL) {
+			RTE_LOG(ERR, SCHED,
+				"%s: Memory allocation fails\n", __func__);
 
-	/* Token Bucket (TB) */
-	if (params->tb_rate == port->rate) {
-		s->tb_credits_per_period = 1;
-		s->tb_period = 1;
-	} else {
-		double tb_rate = ((double) params->tb_rate) / ((double) port->rate);
-		double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
+			rte_sched_free_memory(port, n_subports);
+			return -ENOMEM;
+		}
 
-		rte_approx_64(tb_rate, d, &s->tb_credits_per_period, &s->tb_period);
-	}
+		n_subports++;
 
-	s->tb_size = params->tb_size;
-	s->tb_time = port->time;
-	s->tb_credits = s->tb_size / 2;
+		subport_profile_id = 0;
 
-	/* Traffic Classes (TCs) */
-	s->tc_period = rte_sched_time_ms_to_bytes(params->tc_period, port->rate);
-	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
-		if (params->qsize[i])
-			s->tc_credits_per_period[i]
-				= rte_sched_time_ms_to_bytes(params->tc_period,
-					params->tc_rate[i]);
-	}
-	s->tc_time = port->time + s->tc_period;
-	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
-		if (params->qsize[i])
-			s->tc_credits[i] = s->tc_credits_per_period[i];
+		/* Port */
+		port->subports[subport_id] = s;
 
-	/* compile time checks */
-	RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS == 0);
-	RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS &
-		(RTE_SCHED_PORT_N_GRINDERS - 1));
+		s->tb_time = port->time;
 
-	/* User parameters */
-	s->n_pipes_per_subport_enabled = params->n_pipes_per_subport_enabled;
-	memcpy(s->qsize, params->qsize, sizeof(params->qsize));
-	s->n_pipe_profiles = params->n_pipe_profiles;
-	s->n_max_pipe_profiles = params->n_max_pipe_profiles;
+		/* compile time checks */
+		RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS == 0);
+		RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS &
+			(RTE_SCHED_PORT_N_GRINDERS - 1));
+
+		/* User parameters */
+		s->n_pipes_per_subport_enabled =
+				params->n_pipes_per_subport_enabled;
+		memcpy(s->qsize, params->qsize, sizeof(params->qsize));
+		s->n_pipe_profiles = params->n_pipe_profiles;
+		s->n_max_pipe_profiles = params->n_max_pipe_profiles;
 
 #ifdef RTE_SCHED_RED
-	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
-		uint32_t j;
+		for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
+			uint32_t j;
 
-		for (j = 0; j < RTE_COLORS; j++) {
+			for (j = 0; j < RTE_COLORS; j++) {
 			/* if min/max are both zero, then RED is disabled */
-			if ((params->red_params[i][j].min_th |
-			     params->red_params[i][j].max_th) == 0) {
-				continue;
+				if ((params->red_params[i][j].min_th |
+				     params->red_params[i][j].max_th) == 0) {
+					continue;
+				}
+
+				if (rte_red_config_init(&s->red_config[i][j],
+				    params->red_params[i][j].wq_log2,
+				    params->red_params[i][j].min_th,
+				    params->red_params[i][j].max_th,
+				    params->red_params[i][j].maxp_inv) != 0) {
+					rte_sched_free_memory(port, n_subports);
+
+					RTE_LOG(NOTICE, SCHED,
+					"%s: RED configuration init fails\n",
+					__func__);
+					return -EINVAL;
+				}
 			}
+		}
+#endif
 
-			if (rte_red_config_init(&s->red_config[i][j],
-				params->red_params[i][j].wq_log2,
-				params->red_params[i][j].min_th,
-				params->red_params[i][j].max_th,
-				params->red_params[i][j].maxp_inv) != 0) {
-				rte_sched_free_memory(port, n_subports);
+		/* Scheduling loop detection */
+		s->pipe_loop = RTE_SCHED_PIPE_INVALID;
+		s->pipe_exhaustion = 0;
+
+		/* Grinders */
+		s->busy_grinders = 0;
+
+		/* Queue base calculation */
+		rte_sched_subport_config_qsize(s);
+
+		/* Large data structures */
+		s->pipe = (struct rte_sched_pipe *)
+			(s->memory + rte_sched_subport_get_array_base(params,
+			e_RTE_SCHED_SUBPORT_ARRAY_PIPE));
+		s->queue = (struct rte_sched_queue *)
+			(s->memory + rte_sched_subport_get_array_base(params,
+			e_RTE_SCHED_SUBPORT_ARRAY_QUEUE));
+		s->queue_extra = (struct rte_sched_queue_extra *)
+			(s->memory + rte_sched_subport_get_array_base(params,
+			e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_EXTRA));
+		s->pipe_profiles = (struct rte_sched_pipe_profile *)
+			(s->memory + rte_sched_subport_get_array_base(params,
+			e_RTE_SCHED_SUBPORT_ARRAY_PIPE_PROFILES));
+		s->bmp_array =  s->memory + rte_sched_subport_get_array_base(
+				params, e_RTE_SCHED_SUBPORT_ARRAY_BMP_ARRAY);
+		s->queue_array = (struct rte_mbuf **)
+			(s->memory + rte_sched_subport_get_array_base(params,
+			e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_ARRAY));
+
+		/* Pipe profile table */
+		rte_sched_subport_config_pipe_profile_table(s, params,
+							    port->rate);
+
+		/* Bitmap */
+		n_subport_pipe_queues = rte_sched_subport_pipe_queues(s);
+		bmp_mem_size = rte_bitmap_get_memory_footprint(
+						n_subport_pipe_queues);
+		s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array,
+					bmp_mem_size);
+		if (s->bmp == NULL) {
+			RTE_LOG(ERR, SCHED,
+				"%s: Subport bitmap init error\n", __func__);
 
-				RTE_LOG(NOTICE, SCHED,
-				"%s: RED configuration init fails\n", __func__);
-				return -EINVAL;
-			}
+			rte_sched_free_memory(port, n_subports);
+			return -EINVAL;
 		}
-	}
-#endif
 
-	/* Scheduling loop detection */
-	s->pipe_loop = RTE_SCHED_PIPE_INVALID;
-	s->pipe_exhaustion = 0;
+		for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i++)
+			s->grinder_base_bmp_pos[i] = RTE_SCHED_PIPE_INVALID;
 
-	/* Grinders */
-	s->busy_grinders = 0;
+#ifdef RTE_SCHED_SUBPORT_TC_OV
+		/* TC oversubscription */
+		s->tc_ov_wm_min = port->mtu;
+		s->tc_ov_wm = s->tc_ov_wm_max;
+		s->tc_ov_period_id = 0;
+		s->tc_ov = 0;
+		s->tc_ov_n = 0;
+		s->tc_ov_rate = 0;
+#endif
+	}
 
-	/* Queue base calculation */
-	rte_sched_subport_config_qsize(s);
+	{
+	/* update subport parameters from subport profile table*/
+		profile = port->subport_profiles + subport_profile_id;
 
-	/* Large data structures */
-	s->pipe = (struct rte_sched_pipe *)
-		(s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_PIPE));
-	s->queue = (struct rte_sched_queue *)
-		(s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_QUEUE));
-	s->queue_extra = (struct rte_sched_queue_extra *)
-		(s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_EXTRA));
-	s->pipe_profiles = (struct rte_sched_pipe_profile *)
-		(s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_PIPE_PROFILES));
-	s->bmp_array =  s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_BMP_ARRAY);
-	s->queue_array = (struct rte_mbuf **)
-		(s->memory + rte_sched_subport_get_array_base(params,
-		e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_ARRAY));
-
-	/* Pipe profile table */
-	rte_sched_subport_config_pipe_profile_table(s, params, port->rate);
+		s = port->subports[subport_id];
 
-	/* Bitmap */
-	n_subport_pipe_queues = rte_sched_subport_pipe_queues(s);
-	bmp_mem_size = rte_bitmap_get_memory_footprint(n_subport_pipe_queues);
-	s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array,
-				bmp_mem_size);
-	if (s->bmp == NULL) {
-		RTE_LOG(ERR, SCHED,
-			"%s: Subport bitmap init error\n", __func__);
+		s->tb_credits = profile->tb_size / 2;
 
-		rte_sched_free_memory(port, n_subports);
-		return -EINVAL;
-	}
+		s->tc_time = port->time + profile->tc_period;
 
-	for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i++)
-		s->grinder_base_bmp_pos[i] = RTE_SCHED_PIPE_INVALID;
+		for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
+			if (s->qsize[i])
+				s->tc_credits[i] =
+					profile->tc_credits_per_period[i];
+			else
+				profile->tc_credits_per_period[i] = 0;
 
 #ifdef RTE_SCHED_SUBPORT_TC_OV
-	/* TC oversubscription */
-	s->tc_ov_wm_min = port->mtu;
-	s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(params->tc_period,
-						     s->pipe_tc_be_rate_max);
-	s->tc_ov_wm = s->tc_ov_wm_max;
-	s->tc_ov_period_id = 0;
-	s->tc_ov = 0;
-	s->tc_ov_n = 0;
-	s->tc_ov_rate = 0;
+		s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(profile->tc_period,
+							s->pipe_tc_be_rate_max);
 #endif
+		s->profile = subport_profile_id;
 
-	rte_sched_port_log_subport_config(port, subport_id);
+	}
+
+	rte_sched_port_log_subport_profile(port, subport_profile_id);
 
 	return 0;
 }
@@ -1351,6 +1294,7 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 	int32_t pipe_profile)
 {
 	struct rte_sched_subport *s;
+	struct rte_sched_subport_profile *sp;
 	struct rte_sched_pipe *p;
 	struct rte_sched_pipe_profile *params;
 	uint32_t n_subports = subport_id + 1;
@@ -1391,14 +1335,15 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 		return -EINVAL;
 	}
 
+	sp = port->subport_profiles + s->profile;
 	/* Handle the case when pipe already has a valid configuration */
 	p = s->pipe + pipe_id;
 	if (p->tb_time) {
 		params = s->pipe_profiles + p->profile;
 
 		double subport_tc_be_rate =
-			(double) s->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
-			/ (double) s->tc_period;
+		(double)sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
+			/ (double) sp->tc_period;
 		double pipe_tc_be_rate =
 			(double) params->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
 			/ (double) params->tc_period;
@@ -1440,8 +1385,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 	{
 		/* Subport best effort tc oversubscription */
 		double subport_tc_be_rate =
-			(double) s->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
-			/ (double) s->tc_period;
+		(double)sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
+			/ (double) sp->tc_period;
 		double pipe_tc_be_rate =
 			(double) params->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE]
 			/ (double) params->tc_period;
@@ -2229,14 +2174,15 @@ grinder_credits_update(struct rte_sched_port *port,
 	struct rte_sched_grinder *grinder = subport->grinder + pos;
 	struct rte_sched_pipe *pipe = grinder->pipe;
 	struct rte_sched_pipe_profile *params = grinder->pipe_params;
+	struct rte_sched_subport_profile *sp = grinder->subport_params;
 	uint64_t n_periods;
 	uint32_t i;
 
 	/* Subport TB */
-	n_periods = (port->time - subport->tb_time) / subport->tb_period;
-	subport->tb_credits += n_periods * subport->tb_credits_per_period;
-	subport->tb_credits = RTE_MIN(subport->tb_credits, subport->tb_size);
-	subport->tb_time += n_periods * subport->tb_period;
+	n_periods = (port->time - subport->tb_time) / sp->tb_period;
+	subport->tb_credits += n_periods * sp->tb_credits_per_period;
+	subport->tb_credits = RTE_MIN(subport->tb_credits, sp->tb_size);
+	subport->tb_time += n_periods * sp->tb_period;
 
 	/* Pipe TB */
 	n_periods = (port->time - pipe->tb_time) / params->tb_period;
@@ -2247,9 +2193,9 @@ grinder_credits_update(struct rte_sched_port *port,
 	/* Subport TCs */
 	if (unlikely(port->time >= subport->tc_time)) {
 		for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
-			subport->tc_credits[i] = subport->tc_credits_per_period[i];
+			subport->tc_credits[i] = sp->tc_credits_per_period[i];
 
-		subport->tc_time = port->time + subport->tc_period;
+		subport->tc_time = port->time + sp->tc_period;
 	}
 
 	/* Pipe TCs */
@@ -2265,8 +2211,10 @@ grinder_credits_update(struct rte_sched_port *port,
 
 static inline uint64_t
 grinder_tc_ov_credits_update(struct rte_sched_port *port,
-	struct rte_sched_subport *subport)
+	struct rte_sched_subport *subport, uint32_t pos)
 {
+	struct rte_sched_grinder *grinder = subport->grinder + pos;
+	struct rte_sched_subport_profile *sp = grinder->subport_params;
 	uint64_t tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
 	uint64_t tc_consumption = 0, tc_ov_consumption_max;
 	uint64_t tc_ov_wm = subport->tc_ov_wm;
@@ -2276,17 +2224,17 @@ grinder_tc_ov_credits_update(struct rte_sched_port *port,
 		return subport->tc_ov_wm_max;
 
 	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASS_BE; i++) {
-		tc_ov_consumption[i] =
-			subport->tc_credits_per_period[i] - subport->tc_credits[i];
+		tc_ov_consumption[i] = sp->tc_credits_per_period[i]
+					-  subport->tc_credits[i];
 		tc_consumption += tc_ov_consumption[i];
 	}
 
 	tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASS_BE] =
-		subport->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] -
+	sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] -
 		subport->tc_credits[RTE_SCHED_TRAFFIC_CLASS_BE];
 
 	tc_ov_consumption_max =
-		subport->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] -
+	sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] -
 			tc_consumption;
 
 	if (tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASS_BE] >
@@ -2312,14 +2260,15 @@ grinder_credits_update(struct rte_sched_port *port,
 	struct rte_sched_grinder *grinder = subport->grinder + pos;
 	struct rte_sched_pipe *pipe = grinder->pipe;
 	struct rte_sched_pipe_profile *params = grinder->pipe_params;
+	struct rte_sched_subport_profile *sp = grinder->subport_params;
 	uint64_t n_periods;
 	uint32_t i;
 
 	/* Subport TB */
-	n_periods = (port->time - subport->tb_time) / subport->tb_period;
-	subport->tb_credits += n_periods * subport->tb_credits_per_period;
-	subport->tb_credits = RTE_MIN(subport->tb_credits, subport->tb_size);
-	subport->tb_time += n_periods * subport->tb_period;
+	n_periods = (port->time - subport->tb_time) / sp->tb_period;
+	subport->tb_credits += n_periods * sp->tb_credits_per_period;
+	subport->tb_credits = RTE_MIN(subport->tb_credits, sp->tb_size);
+	subport->tb_time += n_periods * sp->tb_period;
 
 	/* Pipe TB */
 	n_periods = (port->time - pipe->tb_time) / params->tb_period;
@@ -2329,12 +2278,13 @@ grinder_credits_update(struct rte_sched_port *port,
 
 	/* Subport TCs */
 	if (unlikely(port->time >= subport->tc_time)) {
-		subport->tc_ov_wm = grinder_tc_ov_credits_update(port, subport);
+		subport->tc_ov_wm =
+			grinder_tc_ov_credits_update(port, subport, pos);
 
 		for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
-			subport->tc_credits[i] = subport->tc_credits_per_period[i];
+			subport->tc_credits[i] = sp->tc_credits_per_period[i];
 
-		subport->tc_time = port->time + subport->tc_period;
+		subport->tc_time = port->time + sp->tc_period;
 		subport->tc_ov_period_id++;
 	}
 
@@ -2857,6 +2807,9 @@ grinder_handle(struct rte_sched_port *port,
 		struct rte_sched_pipe *pipe = grinder->pipe;
 
 		grinder->pipe_params = subport->pipe_profiles + pipe->profile;
+		grinder->subport_params = port->subport_profiles +
+						subport->profile;
+
 		grinder_prefetch_tc_queue_arrays(subport, pos);
 		grinder_credits_update(port, subport, pos);
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index aede2e986..1506c6487 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -361,20 +361,27 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port,
 
 /**
  * Hierarchical scheduler subport configuration
- *
+ * Note that this function is safe to use at runtime
+ * to configure subport bandwidth profile.
  * @param port
  *   Handle to port scheduler instance
  * @param subport_id
  *   Subport ID
  * @param params
- *   Subport configuration parameters
+ *   Subport configuration parameters. Must be non-NULL
+ *   for first invocation (i.e initialization) for a given
+ *   subport. Ignored (recommended value is NULL) for all
+ *   subsequent invocation on the same subport.
+ * @param subport_profile_id
+ *   ID of subport bandwidth profile
  * @return
  *   0 upon success, error code otherwise
  */
 int
 rte_sched_subport_config(struct rte_sched_port *port,
 	uint32_t subport_id,
-	struct rte_sched_subport_params *params);
+	struct rte_sched_subport_params *params,
+	uint32_t subport_profile_id);
 
 /**
  * Hierarchical scheduler pipe configuration
-- 
2.17.1


^ permalink raw reply	[relevance 2%]
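
To illustrate the calling convention documented above, a short sketch
follows; the helper and its names are illustrative, not part of the patch:

#include <rte_sched.h>

/* Sketch: re-point an already configured subport at another bandwidth
 * profile at runtime. The first invocation for this subport must have
 * passed non-NULL params (applying profile 0 or another valid id);
 * afterwards params is ignored, so NULL is the recommended value. */
static int
subport_switch_profile(struct rte_sched_port *port, uint32_t subport_id,
	uint32_t new_profile_id)
{
	return rte_sched_subport_config(port, subport_id, NULL,
			new_profile_id);
}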

* [dpdk-dev] [PATCH v9 1/8] sched: add support profile table
  2020-10-09 12:39  3%   ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
@ 2020-10-09 12:39  4%     ` Savinay Dharmappa
  2020-10-09 12:39  2%     ` [dpdk-dev] [PATCH v9 3/8] sched: update subport rate dynamically Savinay Dharmappa
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 200+ results
From: Savinay Dharmappa @ 2020-10-09 12:39 UTC (permalink / raw)
  To: cristian.dumitrescu, jasvinder.singh, dev; +Cc: savinay.dharmappa

Add subport profile table to internal port data structure
and update the port config function.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_sched/rte_sched.c           | 197 ++++++++++++++++++++++++-
 lib/librte_sched/rte_sched.h           |  25 ++++
 3 files changed, 222 insertions(+), 3 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 808bdc4e5..6968c27f6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -283,6 +283,9 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+* ``sched`` changes
+
+  * Added new fields to ``struct rte_sched_subport_port_params``.
 
 Known Issues
 ------------
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 75be8b6bd..a44638f31 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -101,6 +101,16 @@ enum grinder_state {
 	e_GRINDER_READ_MBUF
 };
 
+struct rte_sched_subport_profile {
+	/* Token bucket (TB) */
+	uint64_t tb_period;
+	uint64_t tb_credits_per_period;
+	uint64_t tb_size;
+
+	uint64_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+	uint64_t tc_period;
+};
+
 struct rte_sched_grinder {
 	/* Pipe cache */
 	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
@@ -212,6 +222,8 @@ struct rte_sched_port {
 	uint16_t pipe_queue[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
 	uint8_t pipe_tc[RTE_SCHED_QUEUES_PER_PIPE];
 	uint8_t tc_queue[RTE_SCHED_QUEUES_PER_PIPE];
+	uint32_t n_subport_profiles;
+	uint32_t n_max_subport_profiles;
 	uint64_t rate;
 	uint32_t mtu;
 	uint32_t frame_overhead;
@@ -230,6 +242,7 @@ struct rte_sched_port {
 	uint32_t subport_id;
 
 	/* Large data structures */
+	struct rte_sched_subport_profile *subport_profiles;
 	struct rte_sched_subport *subports[0] __rte_cache_aligned;
 } __rte_cache_aligned;
 
@@ -375,9 +388,61 @@ pipe_profile_check(struct rte_sched_pipe_params *params,
 	return 0;
 }
 
+static int
+subport_profile_check(struct rte_sched_subport_profile_params *params,
+	uint64_t rate)
+{
+	uint32_t i;
+
+	/* Check user parameters */
+	if (params == NULL) {
+		RTE_LOG(ERR, SCHED, "%s: "
+		"Incorrect value for parameter params\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->tb_rate == 0 || params->tb_rate > rate) {
+		RTE_LOG(ERR, SCHED, "%s: "
+		"Incorrect value for tb rate\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->tb_size == 0) {
+		RTE_LOG(ERR, SCHED, "%s: "
+		"Incorrect value for tb size\n", __func__);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
+		uint64_t tc_rate = params->tc_rate[i];
+
+		if (tc_rate == 0 || (tc_rate > params->tb_rate)) {
+			RTE_LOG(ERR, SCHED, "%s: "
+			"Incorrect value for tc rate\n", __func__);
+			return -EINVAL;
+		}
+	}
+
+	if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
+		RTE_LOG(ERR, SCHED, "%s: "
+		"Incorrect tc rate(best effort)\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->tc_period == 0) {
+		RTE_LOG(ERR, SCHED, "%s: "
+		"Incorrect value for tc period\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int
 rte_sched_port_check_params(struct rte_sched_port_params *params)
 {
+	uint32_t i;
+
 	if (params == NULL) {
 		RTE_LOG(ERR, SCHED,
 			"%s: Incorrect value for parameter params\n", __func__);
@@ -414,6 +479,29 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
 		return -EINVAL;
 	}
 
+	if (params->subport_profiles == NULL ||
+		params->n_subport_profiles == 0 ||
+		params->n_max_subport_profiles == 0 ||
+		params->n_subport_profiles > params->n_max_subport_profiles) {
+		RTE_LOG(ERR, SCHED,
+		"%s: Incorrect value for subport profiles\n", __func__);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < params->n_subport_profiles; i++) {
+		struct rte_sched_subport_profile_params *p =
+						params->subport_profiles + i;
+		int status;
+
+		status = subport_profile_check(p, params->rate);
+		if (status != 0) {
+			RTE_LOG(ERR, SCHED,
+			"%s: subport profile check failed(%d)\n",
+			__func__, status);
+			return -EINVAL;
+		}
+	}
+
 	/* n_pipes_per_subport: non-zero, power of 2 */
 	if (params->n_pipes_per_subport == 0 ||
 	    !rte_is_power_of_2(params->n_pipes_per_subport)) {
@@ -555,6 +643,42 @@ rte_sched_port_log_pipe_profile(struct rte_sched_subport *subport, uint32_t i)
 		p->wrr_cost[0], p->wrr_cost[1], p->wrr_cost[2], p->wrr_cost[3]);
 }
 
+static void
+rte_sched_port_log_subport_profile(struct rte_sched_port *port, uint32_t i)
+{
+	struct rte_sched_subport_profile *p = port->subport_profiles + i;
+
+	RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n"
+	"Token bucket: period = %"PRIu64", credits per period = %"PRIu64","
+	"size = %"PRIu64"\n"
+	"Traffic classes: period = %"PRIu64",\n"
+	"credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+	" %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+	" %"PRIu64", %"PRIu64", %"PRIu64"]\n",
+	i,
+
+	/* Token bucket */
+	p->tb_period,
+	p->tb_credits_per_period,
+	p->tb_size,
+
+	/* Traffic classes */
+	p->tc_period,
+	p->tc_credits_per_period[0],
+	p->tc_credits_per_period[1],
+	p->tc_credits_per_period[2],
+	p->tc_credits_per_period[3],
+	p->tc_credits_per_period[4],
+	p->tc_credits_per_period[5],
+	p->tc_credits_per_period[6],
+	p->tc_credits_per_period[7],
+	p->tc_credits_per_period[8],
+	p->tc_credits_per_period[9],
+	p->tc_credits_per_period[10],
+	p->tc_credits_per_period[11],
+	p->tc_credits_per_period[12]);
+}
+
 static inline uint64_t
 rte_sched_time_ms_to_bytes(uint64_t time_ms, uint64_t rate)
 {
@@ -623,6 +747,37 @@ rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
 	dst->wrr_cost[3] = (uint8_t) wrr_cost[3];
 }
 
+static void
+rte_sched_subport_profile_convert(struct rte_sched_subport_profile_params *src,
+	struct rte_sched_subport_profile *dst,
+	uint64_t rate)
+{
+	uint32_t i;
+
+	/* Token Bucket */
+	if (src->tb_rate == rate) {
+		dst->tb_credits_per_period = 1;
+		dst->tb_period = 1;
+	} else {
+		double tb_rate = (double) src->tb_rate
+				/ (double) rate;
+		double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
+
+		rte_approx_64(tb_rate, d, &dst->tb_credits_per_period,
+			&dst->tb_period);
+	}
+
+	dst->tb_size = src->tb_size;
+
+	/* Traffic Classes */
+	dst->tc_period = rte_sched_time_ms_to_bytes(src->tc_period, rate);
+
+	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
+		dst->tc_credits_per_period[i]
+			= rte_sched_time_ms_to_bytes(src->tc_period,
+				src->tc_rate[i]);
+}
+
 static void
 rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport,
 	struct rte_sched_subport_params *params, uint64_t rate)
@@ -647,6 +802,24 @@ rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport,
 	}
 }
 
+static void
+rte_sched_port_config_subport_profile_table(struct rte_sched_port *port,
+	struct rte_sched_port_params *params,
+	uint64_t rate)
+{
+	uint32_t i;
+
+	for (i = 0; i < port->n_subport_profiles; i++) {
+		struct rte_sched_subport_profile_params *src
+				= params->subport_profiles + i;
+		struct rte_sched_subport_profile *dst
+				= port->subport_profiles + i;
+
+		rte_sched_subport_profile_convert(src, dst, rate);
+		rte_sched_port_log_subport_profile(port, i);
+	}
+}
+
 static int
 rte_sched_subport_check_params(struct rte_sched_subport_params *params,
 	uint32_t n_max_pipes_per_subport,
@@ -793,7 +966,7 @@ struct rte_sched_port *
 rte_sched_port_config(struct rte_sched_port_params *params)
 {
 	struct rte_sched_port *port = NULL;
-	uint32_t size0, size1;
+	uint32_t size0, size1, size2;
 	uint32_t cycles_per_byte;
 	uint32_t i, j;
 	int status;
@@ -808,10 +981,21 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 
 	size0 = sizeof(struct rte_sched_port);
 	size1 = params->n_subports_per_port * sizeof(struct rte_sched_subport *);
+	size2 = params->n_max_subport_profiles *
+		sizeof(struct rte_sched_subport_profile);
 
 	/* Allocate memory to store the data structures */
-	port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE,
-		params->socket);
+	port = rte_zmalloc_socket("qos_params", size0 + size1,
+				 RTE_CACHE_LINE_SIZE, params->socket);
+	if (port == NULL) {
+		RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__);
+
+		return NULL;
+	}
+
+	/* Allocate memory to store the subport profile */
+	port->subport_profiles  = rte_zmalloc_socket("subport_profile", size2,
+					RTE_CACHE_LINE_SIZE, params->socket);
 	if (port == NULL) {
 		RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__);
 
@@ -820,6 +1004,8 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 
 	/* User parameters */
 	port->n_subports_per_port = params->n_subports_per_port;
+	port->n_subport_profiles = params->n_subport_profiles;
+	port->n_max_subport_profiles = params->n_max_subport_profiles;
 	port->n_pipes_per_subport = params->n_pipes_per_subport;
 	port->n_pipes_per_subport_log2 =
 			__builtin_ctz(params->n_pipes_per_subport);
@@ -850,6 +1036,9 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 	port->time_cpu_bytes = 0;
 	port->time = 0;
 
+	/* Subport profile table */
+	rte_sched_port_config_subport_profile_table(port, params, port->rate);
+
 	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
 		/ params->rate;
 	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
@@ -905,6 +1094,7 @@ rte_sched_port_free(struct rte_sched_port *port)
 	for (i = 0; i < port->n_subports_per_port; i++)
 		rte_sched_subport_free(port, port->subports[i]);
 
+	rte_free(port->subport_profiles);
 	rte_free(port);
 }
 
@@ -961,6 +1151,7 @@ rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports)
 		rte_sched_subport_free(port, subport);
 	}
 
+	rte_free(port->subport_profiles);
 	rte_free(port);
 }
 
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 8a5a93c98..39339b7f1 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -192,6 +192,20 @@ struct rte_sched_subport_params {
 #endif
 };
 
+struct rte_sched_subport_profile_params {
+	/** Token bucket rate (measured in bytes per second) */
+	uint64_t tb_rate;
+
+	/** Token bucket size (measured in credits) */
+	uint64_t tb_size;
+
+	/** Traffic class rates (measured in bytes per second) */
+	uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+
+	/** Enforcement period for rates (measured in milliseconds) */
+	uint64_t tc_period;
+};
+
 /** Subport statistics */
 struct rte_sched_subport_stats {
 	/** Number of packets successfully written */
@@ -254,6 +268,17 @@ struct rte_sched_port_params {
 	/** Number of subports */
 	uint32_t n_subports_per_port;
 
+	/** subport profile table.
+	 * Every pipe is configured using one of the profiles from this table.
+	 */
+	struct rte_sched_subport_profile_params *subport_profiles;
+
+	/** Profiles in the pipe profile table */
+	uint32_t n_subport_profiles;
+
+	/** Max allowed profiles in the pipe profile table */
+	uint32_t n_max_subport_profiles;
+
 	/** Maximum number of subport pipes.
 	 * This parameter is used to reserve a fixed number of bits
 	 * in struct rte_mbuf::sched.queue_id for the pipe_id for all
-- 
2.17.1


^ permalink raw reply	[relevance 4%]
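
A sketch of how an application might fill the new port-level fields added
above; the rate and size values are illustrative placeholders, not taken
from this series:

#include <rte_common.h>
#include <rte_sched.h>

static struct rte_sched_subport_profile_params subport_profile[] = {
	{
		.tb_rate = 1250000000,		/* bytes per second */
		.tb_size = 1000000,		/* credits */
		.tc_rate = {1250000000, 1250000000, 1250000000, 1250000000,
			1250000000, 1250000000, 1250000000, 1250000000,
			1250000000, 1250000000, 1250000000, 1250000000,
			1250000000},
		.tc_period = 10,		/* milliseconds */
	},
};

static struct rte_sched_port_params port_params = {
	/* ... existing fields (name, socket, rate, mtu, ...) unchanged ... */
	.subport_profiles = subport_profile,
	.n_subport_profiles = RTE_DIM(subport_profile),
	.n_max_subport_profiles = 8,	/* illustrative upper bound */
};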

* [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth
    @ 2020-10-09 12:39  3%   ` Savinay Dharmappa
  2020-10-09 12:39  4%     ` [dpdk-dev] [PATCH v9 1/8] sched: add support profile table Savinay Dharmappa
                       ` (3 more replies)
  1 sibling, 4 replies; 200+ results
From: Savinay Dharmappa @ 2020-10-09 12:39 UTC (permalink / raw)
  To: cristian.dumitrescu, jasvinder.singh, dev; +Cc: savinay.dharmappa

DPDK sched library allows runtime configuration of the pipe profiles to the
pipes of the subport once scheduler hierarchy is constructed. However, to
change the subport level bandwidth, existing hierarchy needs to be
dismantled and whole process of building hierarchy under subport nodes
needs to be repeated which might result in router downtime. Furthermore,
due to lack of dynamic configuration of the subport bandwidth profile
configuration (shaper and Traffic class rates), the user application
is unable to dynamically re-distribute the excess-bandwidth of one subport
among other subports in the scheduler hierarchy. Therefore, it is also not
possible to adjust the subport bandwidth profile in sync with dynamic
changes in pipe profiles of subscribers who want to consume higher
bandwidth opportunistically.

This patch series implements dynamic configuration of the subport bandwidth
profile to overcome the runtime situation when group of subscribers are not
using the allotted bandwidth and dynamic bandwidth re-distribution is
needed the without making any structural changes in the hierarchy.

The implementation work includes refactoring the existing api and
data structures defined for port and subport level, new APIs for
adding subport level bandwidth profiles that can be used in runtime.

---
v8 -> v9
   - updated ABI section in release notes.
   - Addressed review comments from patch 8
     of v8.

v7 -> v8
   - Fix doxygen and clang build error.

v6 -> v7
   - Fix checkpatch warning
     and patch apply issue.

v5 -> v6
   - Fix build warning.
   - change tmgr CLI:
       * remove queue size and pipes per subport from the
         command-line arguments used to add a traffic
         manager subport profile.

       * add pipes per subport as a command-line argument
         when creating a traffic manager port.
   
v4 -> v5
   - Addressed review comments on patches 3 & 6
     of v4.

v3 -> v4
   - Fix patch apply issue.

v2 -> v3
   - Addressed review comments on patches 3 & 5
     of v2.

v1 -> v2
   - Fix checkpatch warnings.
---

Savinay Dharmappa (8):
  sched: add support profile table
  sched: introduce subport profile add function
  sched: update subport rate dynamically
  example/qos_sched: update subport rate dynamically
  example/ip_pipeline: update subport rate dynamically
  drivers/softnic: update subport rate dynamically
  app/test_sched: update subport rate dynamically
  sched: remove redundant code

 app/test/test_sched.c                         |  15 +-
 doc/guides/rel_notes/deprecation.rst          |   6 -
 doc/guides/rel_notes/release_20_11.rst        |  15 +
 .../net/softnic/rte_eth_softnic_internals.h   |  11 +-
 drivers/net/softnic/rte_eth_softnic_tm.c      | 243 +++++--
 examples/ip_pipeline/cli.c                    |  68 +-
 examples/ip_pipeline/tmgr.c                   | 121 +++-
 examples/ip_pipeline/tmgr.h                   |   5 +-
 examples/qos_sched/cfg_file.c                 | 151 ++--
 examples/qos_sched/cfg_file.h                 |   4 +
 examples/qos_sched/init.c                     |  21 +-
 examples/qos_sched/main.h                     |   1 +
 examples/qos_sched/profile.cfg                |   3 +
 lib/librte_sched/rte_sched.c                  | 678 ++++++++++++------
 lib/librte_sched/rte_sched.h                  |  73 +-
 lib/librte_sched/rte_sched_version.map        |   2 +
 16 files changed, 972 insertions(+), 445 deletions(-)

Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 14/14] doc: update patch cheatsheet to use meson
    2020-10-09 10:21  9%   ` [dpdk-dev] [PATCH v6 12/14] doc: remove references to make from contributing guide Ciara Power
@ 2020-10-09 10:21  2%   ` Ciara Power
  1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2020-10-09 10:21 UTC (permalink / raw)
  To: dev; +Cc: Kevin Laatz

From: Kevin Laatz <kevin.laatz@intel.com>

With 'make' being removed, the patch cheatsheet needs to be updated to
remove any references to 'make'. These references have been replaced with
meson alternatives in this patch.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 .../contributing/img/patch_cheatsheet.svg     | 582 ++++++++----------
 1 file changed, 270 insertions(+), 312 deletions(-)

diff --git a/doc/guides/contributing/img/patch_cheatsheet.svg b/doc/guides/contributing/img/patch_cheatsheet.svg
index 85225923e1..986e4db815 100644
--- a/doc/guides/contributing/img/patch_cheatsheet.svg
+++ b/doc/guides/contributing/img/patch_cheatsheet.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:cc="http://creativecommons.org/ns#"
@@ -13,7 +11,7 @@
    width="210mm"
    height="297mm"
    id="svg2985"
-   inkscape:version="0.48.4 r9939"
+   inkscape:version="1.0.1 (3bc2e813f5, 2020-09-07)"
    sodipodi:docname="patch_cheatsheet.svg">
   <sodipodi:namedview
      pagecolor="#ffffff"
@@ -24,17 +22,19 @@
      guidetolerance="10"
      inkscape:pageopacity="0"
      inkscape:pageshadow="2"
-     inkscape:window-width="1184"
-     inkscape:window-height="1822"
+     inkscape:window-width="1920"
+     inkscape:window-height="1017"
      id="namedview274"
      showgrid="false"
-     inkscape:zoom="1.2685914"
-     inkscape:cx="289.93958"
-     inkscape:cy="509.84194"
-     inkscape:window-x="0"
-     inkscape:window-y="19"
-     inkscape:window-maximized="0"
-     inkscape:current-layer="g3272" />
+     inkscape:zoom="0.89702958"
+     inkscape:cx="246.07409"
+     inkscape:cy="416.76022"
+     inkscape:window-x="1072"
+     inkscape:window-y="-8"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="layer1"
+     inkscape:document-rotation="0"
+     inkscape:snap-grids="false" />
   <defs
      id="defs3">
     <linearGradient
@@ -549,347 +549,336 @@
       </g>
     </switch>
     <g
-       transform="matrix(0.89980358,0,0,0.89980358,45.57817,-2.8793563)"
+       transform="matrix(0.89980358,0,0,0.89980358,57.57817,-2.8793563)"
        id="g4009">
       <text
          x="325.02054"
          y="107.5126"
          id="text3212"
          xml:space="preserve"
-         style="font-size:43.11383057px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:43.1138px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
          transform="scale(1.193782,0.83767389)"><tspan
            x="325.02054"
            y="107.5126"
-           id="tspan3214">CHEATSHEET</tspan></text>
+           id="tspan3214"
+           style="font-family:monospace">CHEATSHEET</tspan></text>
       <text
          x="386.51117"
          y="58.178116"
          transform="scale(1.0054999,0.99453018)"
          id="text3212-1"
          xml:space="preserve"
-         style="font-size:42.11373901px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:42.1137px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="386.51117"
            y="58.178116"
-           id="tspan3214-7">PATCH SUBMIT</tspan></text>
+           id="tspan3214-7"
+           style="font-family:monospace">PATCH SUBMIT</tspan></text>
     </g>
     <rect
-       width="714.94495"
-       height="88.618027"
-       rx="20.780111"
-       ry="15.96909"
-       x="14.574773"
-       y="7.0045133"
+       width="759.50977"
+       height="88.591248"
+       rx="22.075403"
+       ry="15.964265"
+       x="14.588161"
+       y="7.0179014"
        id="rect3239"
-       style="fill:none;stroke:#00233b;stroke-width:0.87678075;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:0.903557;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <rect
-       width="713.28113"
-       height="887.29156"
-       rx="17.656931"
-       ry="17.280584"
-       x="15.406689"
-       y="104.73515"
+       width="757.84167"
+       height="887.2605"
+       rx="18.760006"
+       ry="17.27998"
+       x="15.422211"
+       y="104.75068"
        id="rect3239-0"
-       style="fill:none;stroke:#00233b;stroke-width:1.00973284;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:1.04078;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <rect
-       width="694.94904"
-       height="381.31"
-       rx="9.4761629"
-       ry="9.0904856"
-       x="24.336016"
-       y="601.75836"
+       width="732.82446"
+       height="381.28253"
+       rx="9.9926233"
+       ry="9.0898304"
+       x="24.349754"
+       y="601.77209"
        id="rect3239-0-9-4"
-       style="fill:none;stroke:#00233b;stroke-width:1.02322531;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:1.0507;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <path
-       d="m 386.3921,327.23442 323.14298,0"
+       d="M 422.0654,327.23442 H 709.53508"
        id="path4088"
-       style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="fill:none;stroke:#00233b;stroke-width:0.943189px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <text
-       x="396.18015"
+       x="428.18015"
        y="314.45731"
        id="text4090"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="396.18015"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="428.18015"
          y="314.45731"
          id="tspan4092"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Patch Pre-Checks</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Patch Pre-Checks</tspan></text>
     <text
        x="43.44949"
        y="147.32129"
        id="text4090-4"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
          x="43.44949"
          y="147.32129"
          id="tspan4092-3"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Commit Pre-Checks</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Commit Pre-Checks</tspan></text>
     <text
-       x="397.1235"
+       x="429.1235"
        y="144.8549"
        id="text4090-4-3"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="397.1235"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="429.1235"
          y="144.8549"
          id="tspan4092-3-3"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Bugfix?</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Bugfix?</tspan></text>
     <text
        x="41.215897"
        y="634.38617"
        id="text4090-1"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
          x="41.215897"
          y="634.38617"
          id="tspan4092-38"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Git send-email </tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Git send-email </tspan></text>
     <path
        d="m 31.232443,642.80575 376.113467,0"
        id="path4088-7"
        style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
        inkscape:connector-curvature="0" />
     <rect
-       width="342.13785"
-       height="230.74609"
-       rx="10.411126"
-       ry="10.411126"
-       x="25.418407"
-       y="114.92036"
+       width="376.65033"
+       height="230.70007"
+       rx="11.461329"
+       ry="10.40905"
+       x="25.441414"
+       y="114.94337"
        id="rect3239-0-9-4-2"
-       style="fill:none;stroke:#00233b;stroke-width:0.93674862;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:0.982762;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <text
        x="43.44949"
        y="385.8045"
        id="text4090-86"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
          x="43.44949"
          y="385.8045"
          id="tspan4092-5"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Compile Pre-Checks</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Compile Pre-Checks</tspan></text>
     <g
-       transform="translate(352.00486,-348.25973)"
+       transform="matrix(1.0077634,0,0,1,384.57109,-348.25973)"
        id="g3295">
       <text
          x="43.87738"
          y="568.03088"
          id="text4090-8-14"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="43.87738"
            y="568.03088"
            id="tspan4289"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Include warning/error</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Include warning/error</tspan></text>
       <text
          x="43.87738"
          y="537.71906"
          id="text4090-8-14-4"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="43.87738"
            y="537.71906"
            id="tspan4289-1"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Fixes: line</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Fixes: line</tspan></text>
       <text
          x="43.87738"
          y="598.9939"
          id="text4090-8-14-0"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="43.87738"
            y="598.9939"
            id="tspan4289-2"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ How to reproduce</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ How to reproduce</tspan></text>
     </g>
     <g
-       transform="translate(-2.6258125,-26.708615)"
+       transform="matrix(0.88614399,0,0,1.0199334,-5.7864591,-38.84504)"
        id="g4115">
       <g
          id="g3272">
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-1"
            y="454.36987"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-7"
              y="454.36987"
              x="49.093246">+ build gcc icc clang </tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
+           xml:space="preserve"
+           id="text581"
+           y="454.36987"
+           x="49.093246" />
+        <text
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-2"
            y="516.59979"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-79"
              y="516.59979"
-             x="49.093246">+ make test doc </tspan></text>
+             x="49.093246">+ meson -Denable_docs=true</tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-2-0-0"
            y="544.71033"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-79-9-0"
              y="544.71033"
-             x="49.093246">+ make examples</tspan></text>
+             x="49.093246">+ meson -Dexamples=all</tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-2-0-07"
            y="576.83533"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-79-9-3"
              y="576.83533"
-             x="49.093246">+ make shared-lib</tspan></text>
+             x="49.093246"
+             transform="matrix(1.0305467,0,0,1,-1.5447426,0)">+ meson -Ddefault_library=shared</tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-2-0-07-4"
            y="604.88947"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-79-9-3-9"
              y="604.88947"
              x="49.093246">+ library ABI version</tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-2-9"
            y="486.56659"
            x="49.093246"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-79-3"
              y="486.56659"
              x="49.093246">+ build 32 and 64 bits</tspan></text>
       </g>
     </g>
     <text
-       x="74.388756"
-       y="914.65686"
+       x="72.598656"
+       y="937.21002"
        id="text4090-8-1-8-65-9"
        xml:space="preserve"
-       style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.2959px;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.02466"
+       transform="scale(1.0246575,0.97593587)"><tspan
          sodipodi:role="line"
          id="tspan3268"
-         x="74.388756"
-         y="914.65686">git send-email *.patch --annotate --to &lt;maintainer&gt;</tspan><tspan
+         x="72.598656"
+         y="937.21002"
+         style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466">git send-email *.patch --annotate --to &lt;maintainer&gt;</tspan><tspan
          sodipodi:role="line"
          id="tspan3272"
-         x="74.388756"
-         y="938.40686">  --cc dev@dpdk.org [ --cc other@participants.com</tspan><tspan
+         x="72.598656"
+         y="961.54565"
+         style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466">  --cc dev@dpdk.org [ --cc other@participants.com</tspan><tspan
          sodipodi:role="line"
-         x="74.388756"
-         y="962.15686"
-         id="tspan3266">  --cover-letter -v[N] --in-reply-to &lt;message ID&gt; ]</tspan></text>
+         x="72.598656"
+         y="985.88129"
+         id="tspan3266"
+         style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466">  --cover-letter -v[N] --in-reply-to &lt;message ID&gt; ]</tspan></text>
     <text
        x="543.47675"
        y="1032.3459"
        id="text4090-8-7-8-7-6-3-8-2-5"
        xml:space="preserve"
-       style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
          x="543.47675"
          y="1032.3459"
          id="tspan4092-8-6-3-1-8-4-4-5-3"
-         style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">harry.van.haaren@intel.com</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">harry.van.haaren@intel.com</tspan></text>
     <rect
-       width="678.14105"
-       height="87.351799"
-       rx="6.7972355"
-       ry="6.7972355"
-       x="31.865864"
-       y="888.44696"
+       width="711.56055"
+       height="87.327599"
+       rx="7.1322103"
+       ry="6.795352"
+       x="31.877964"
+       y="888.45905"
        id="rect3239-0-9-4-3"
-       style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:1.0242;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <text
        x="543.29498"
        y="1018.1843"
        id="text4090-8-7-8-7-6-3-8-2-5-3"
        xml:space="preserve"
-       style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
          x="543.29498"
          y="1018.1843"
          id="tspan4092-8-6-3-1-8-4-4-5-3-7"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Suggestions / Updates?</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Suggestions / Updates?</tspan></text>
     <g
        id="g3268"
        transform="translate(0,-6)">
       <text
-         sodipodi:linespacing="125%"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
          xml:space="preserve"
          id="text4090-8-1-8"
          y="704.07019"
          x="41.658669"><tspan
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
            id="tspan4092-8-7-6"
            y="704.07019"
            x="41.658669">+ Patch version ( eg: -v2 ) </tspan></text>
       <text
-         sodipodi:linespacing="125%"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
          xml:space="preserve"
          id="text4090-8-1-8-0"
          y="736.29175"
          x="41.658669"><tspan
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
            id="tspan4092-8-7-6-2"
            y="736.29175"
            x="41.658669">+ Patch version annotations</tspan></text>
       <text
-         sodipodi:linespacing="125%"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
          xml:space="preserve"
          id="text4090-8-1-8-6"
          y="766.70355"
          x="41.911205"><tspan
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
            id="tspan4092-8-7-6-1"
            y="766.70355"
            x="41.911205">+ Send --to maintainer </tspan></text>
       <text
-         sodipodi:linespacing="125%"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
          xml:space="preserve"
          id="text4090-8-1-8-6-3"
          y="795.30548"
          x="41.658669"><tspan
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
            id="tspan4092-8-7-6-1-8"
            y="795.30548"
            x="41.658669">+ Send --cc dev@dpdk.org </tspan></text>
       <text
-         sodipodi:linespacing="125%"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
          xml:space="preserve"
          id="text4090-8-1-8-9"
          y="675.25287"
          x="41.658669"><tspan
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
            id="tspan4092-8-7-6-9"
            y="675.25287"
            x="41.658669">+ Cover letter</tspan></text>
@@ -897,73 +886,70 @@
          id="g3303"
          transform="translate(1.0962334,-40.034939)">
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-1-8-65"
            y="868.70337"
            x="41.572586"><tspan
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-7-6-10"
              y="868.70337"
              x="41.572586">+ Send --in-reply-to &lt;message ID&gt;<tspan
-   style="font-size:20px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+   style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:20px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
    id="tspan3184" /></tspan></text>
         <text
-           sodipodi:linespacing="125%"
-           style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+           style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
            xml:space="preserve"
            id="text4090-8-1-8-9-1"
            y="855.79816"
            x="460.18405"><tspan
-             style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
              id="tspan4092-8-7-6-9-7"
              y="855.79816"
              x="460.18405">****</tspan></text>
       </g>
     </g>
     <text
-       x="685.67828"
+       x="697.67828"
        y="76.55056"
        id="text4090-8-1-8-9-1-9"
        xml:space="preserve"
-       style="font-size:20.20989037px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="685.67828"
+       style="font-style:normal;font-weight:normal;font-size:20.2099px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="697.67828"
          y="76.55056"
          id="tspan4092-8-7-6-9-7-4"
-         style="font-size:9.09445095px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">v1.0</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:9.09445px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">v2.0</tspan></text>
     <rect
-       width="342.3053"
-       height="155.54948"
-       rx="9.2344503"
-       ry="9.2344503"
-       x="377.58942"
-       y="114.55766"
+       width="347.40179"
+       height="155.50351"
+       rx="9.3719397"
+       ry="9.2317209"
+       x="412.60239"
+       y="114.58065"
        id="rect3239-0-9-4-2-1"
-       style="fill:none;stroke:#00233b;stroke-width:0.76930124;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:0.774892;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <rect
-       width="342.12564"
-       height="236.79482"
-       rx="10.647112"
-       ry="9.584527"
-       x="25.642178"
-       y="356.86249"
+       width="377.75555"
+       height="234.52185"
+       rx="11.755931"
+       ry="9.4925261"
+       x="25.663876"
+       y="356.88416"
        id="rect3239-0-9-4-2-0"
-       style="fill:none;stroke:#00233b;stroke-width:0.9489302;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:0.99232;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <rect
-       width="341.98428"
-       height="312.73181"
-       rx="8.5358429"
-       ry="8.5358429"
-       x="377.96762"
-       y="280.45331"
+       width="343.53604"
+       height="312.67508"
+       rx="8.5745735"
+       ry="8.5342941"
+       x="414.29037"
+       y="280.48166"
        id="rect3239-0-9-4-2-1-9"
-       style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:1.00217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <path
-       d="m 387.02742,157.3408 323.14298,0"
+       d="M 419.35634,157.3408 H 710.1704"
        id="path4088-8"
-       style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="fill:none;stroke:#00233b;stroke-width:0.94866px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <path
        d="m 36.504486,397.33869 323.142974,0"
@@ -971,9 +957,9 @@
        style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <path
-       d="m 35.494337,156.92238 323.142983,0"
+       d="M 35.494337,156.92238 H 372.01481"
        id="path4088-4"
-       style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="fill:none;stroke:#00233b;stroke-width:1.02049px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <g
        transform="translate(1.0962334,-30.749225)"
@@ -983,45 +969,41 @@
          y="214.1572"
          id="text4090-8-11"
          xml:space="preserve"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="45.371201"
            y="214.1572"
            id="tspan4092-8-52"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Signed-off-by: </tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Signed-off-by: </tspan></text>
       <text
          x="45.371201"
          y="243.81795"
          id="text4090-8-7-8"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="45.371201"
            y="243.81795"
            id="tspan4092-8-6-3"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Suggested-by:</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Suggested-by:</tspan></text>
       <text
          x="45.371201"
          y="273.90939"
          id="text4090-8-7-8-7"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="45.371201"
            y="273.90939"
            id="tspan4092-8-6-3-1"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Reported-by:</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Reported-by:</tspan></text>
       <text
          x="45.371201"
          y="304.00082"
          id="text4090-8-7-8-7-6"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="45.371201"
            y="304.00082"
            id="tspan4092-8-6-3-1-8"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Tested-by:</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Tested-by:</tspan></text>
       <g
          id="g3297"
          transform="translate(1.1147904,-7.2461378)">
@@ -1030,110 +1012,102 @@
            y="368.8187"
            id="text4090-8-7-8-7-6-3"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="45.371201"
              y="368.8187"
              id="tspan4092-8-6-3-1-8-4"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Previous Acks</tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Previous Acks</tspan></text>
         <text
            x="235.24362"
            y="360.3028"
            id="text4090-8-1-8-9-1-4"
            xml:space="preserve"
-           style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="235.24362"
              y="360.3028"
              id="tspan4092-8-7-6-9-7-0"
-             style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       </g>
       <text
          x="45.371201"
          y="334.52298"
          id="text4090-8-7-8-7-6-3-4"
          xml:space="preserve"
-         style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="45.371201"
            y="334.52298"
            id="tspan4092-8-6-3-1-8-4-0"
-           style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Commit message</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Commit message</tspan></text>
     </g>
     <rect
        width="295.87207"
        height="164.50136"
        rx="7.3848925"
        ry="4.489974"
-       x="414.80502"
+       x="444.80502"
        y="611.47064"
        id="rect3239-0-9-4-2-1-9-9"
-       style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <text
-       x="439.4429"
+       x="469.4429"
        y="638.35608"
        id="text4090-1-4"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="439.4429"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="469.4429"
          y="638.35608"
          id="tspan4092-38-8"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Mailing List</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Mailing List</tspan></text>
     <text
-       x="431.55353"
+       x="461.55353"
        y="675.59857"
        id="text4090-8-5-6-9-4-6-6-8"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="431.55353"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="461.55353"
          y="675.59857"
          id="tspan4092-8-5-5-3-4-0-6-2"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Acked-by:</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Acked-by:</tspan></text>
     <text
-       x="431.39734"
+       x="461.39734"
        y="734.18231"
        id="text4090-8-5-6-9-4-6-6-8-5"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="431.39734"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="461.39734"
          y="734.18231"
          id="tspan4092-8-5-5-3-4-0-6-2-1"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Reviewed-by:</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Reviewed-by:</tspan></text>
     <text
-       x="450.8428"
+       x="480.8428"
        y="766.5578"
        id="text4090-8-5-6-9-4-6-6-8-7"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="450.8428"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="480.8428"
          y="766.5578"
          id="tspan4092-8-5-5-3-4-0-6-2-11"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Nack (refuse patch)</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Nack (refuse patch)</tspan></text>
     <path
-       d="m 426.99385,647.80575 272.72607,0"
+       d="M 456.99385,647.80575 H 729.71992"
        id="path4088-7-5"
-       style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+       style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <path
-       d="m 424.7332,742.35699 272.72607,0"
+       d="M 454.7332,742.35699 H 727.45927"
        id="path4088-7-5-2"
-       style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+       style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        inkscape:connector-curvature="0" />
     <text
-       x="431.39734"
+       x="461.39734"
        y="704.78278"
        id="text4090-8-5-6-9-4-6-6-8-5-1"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="431.39734"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="461.39734"
          y="704.78278"
          id="tspan4092-8-5-5-3-4-0-6-2-1-7"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Tested-by:</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Tested-by:</tspan></text>
     <g
        transform="translate(1.0962334,-2.7492248)"
        id="g3613">
@@ -1142,22 +1116,21 @@
          y="1007.5879"
          id="text4090-8-7-8-7-6-3-8"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="43.146141"
            y="1007.5879"
            id="tspan4092-8-6-3-1-8-4-4"
-           style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">Previous Acks only when fixing typos, rebased, or checkpatch issues.</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">Previous Acks only when fixing typos, rebased, or checkpatch issues.</tspan></text>
       <text
          x="30.942892"
          y="1011.3757"
          id="text4090-8-7-8-7-6-3-8-4-1"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="30.942892"
            y="1011.3757"
            id="tspan4092-8-6-3-1-8-4-4-55-7"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
     </g>
     <g
        transform="translate(1.0962334,-2.7492248)"
@@ -1167,35 +1140,34 @@
          y="1020.4383"
          id="text4090-8-7-8-7-6-3-8-4"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="42.176418"
            y="1020.4383"
            id="tspan4092-8-6-3-1-8-4-4-55"
-           style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">The version.map function names must be in alphabetical order.</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">The version.map function names must be in alphabetical order.</tspan></text>
       <text
          x="30.942892"
          y="1024.2014"
          id="text4090-8-7-8-7-6-3-8-4-1-5"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="30.942892"
            y="1024.2014"
            id="tspan4092-8-6-3-1-8-4-4-55-7-2"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="25.247679"
          y="1024.2014"
          id="text4090-8-7-8-7-6-3-8-4-1-5-6"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="25.247679"
            y="1024.2014"
            id="tspan4092-8-6-3-1-8-4-4-55-7-2-8"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
     </g>
     <g
-       transform="translate(1.0962334,-30.749225)"
+       transform="matrix(1.0211743,0,0,1,25.427515,-30.749225)"
        id="g3275">
       <g
          id="g3341">
@@ -1204,67 +1176,61 @@
            y="390.17807"
            id="text4090-8"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="394.78601"
              y="390.17807"
              id="tspan4092-8"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Rebase to git  </tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Rebase to git  </tspan></text>
         <text
            x="394.78601"
            y="420.24835"
            id="text4090-8-5"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="394.78601"
              y="420.24835"
              id="tspan4092-8-5"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Checkpatch </tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Checkpatch </tspan></text>
         <text
            x="394.78601"
            y="450.53394"
            id="text4090-8-5-6"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="394.78601"
              y="450.53394"
              id="tspan4092-8-5-5"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ ABI breakage </tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ ABI breakage </tspan></text>
         <text
            x="394.78601"
            y="513.13031"
            id="text4090-8-5-6-9-4"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="394.78601"
              y="513.13031"
              id="tspan4092-8-5-5-3-4"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Maintainers file</tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Maintainers file</tspan></text>
         <text
            x="394.78601"
            y="573.48621"
            id="text4090-8-5-6-9-4-6"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="394.78601"
              y="573.48621"
              id="tspan4092-8-5-5-3-4-0"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Release notes</tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Release notes</tspan></text>
         <text
            x="395.79617"
            y="603.98718"
            id="text4090-8-5-6-9-4-6-6"
            xml:space="preserve"
-           style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-           sodipodi:linespacing="125%"><tspan
+           style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
              x="395.79617"
              y="603.98718"
              id="tspan4092-8-5-5-3-4-0-6"
-             style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Documentation</tspan></text>
+             style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Documentation</tspan></text>
         <g
            transform="translate(0,-0.83470152)"
            id="g3334">
@@ -1276,24 +1242,22 @@
                y="468.01297"
                id="text4090-8-1-8-9-1-4-1"
                xml:space="preserve"
-               style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-               sodipodi:linespacing="125%"><tspan
+               style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
                  x="660.46729"
                  y="468.01297"
                  id="tspan4092-8-7-6-9-7-0-7"
-                 style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">**</tspan></text>
+                 style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">**</tspan></text>
           </g>
           <text
              x="394.78601"
              y="483.59955"
              id="text4090-8-5-6-9"
              xml:space="preserve"
-             style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-             sodipodi:linespacing="125%"><tspan
+             style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
                x="394.78601"
                y="483.59955"
                id="tspan4092-8-5-5-3"
-               style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Update version.map</tspan></text>
+               style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Update version.map</tspan></text>
         </g>
         <g
            id="g3428"
@@ -1303,12 +1267,11 @@
              y="541.38928"
              id="text4090-8-5-6-9-4-6-1"
              xml:space="preserve"
-             style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-             sodipodi:linespacing="125%"><tspan
+             style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
                x="394.78601"
                y="541.38928"
                id="tspan4092-8-5-5-3-4-0-7"
-               style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Doxygen</tspan></text>
+               style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Doxygen</tspan></text>
           <g
              transform="translate(-119.92979,57.949844)"
              id="g3267-9">
@@ -1317,28 +1280,26 @@
                y="473.13675"
                id="text4090-8-1-8-9-1-4-1-4"
                xml:space="preserve"
-               style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-               sodipodi:linespacing="125%"><tspan
+               style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
                  x="628.93628"
                  y="473.13675"
                  id="tspan4092-8-7-6-9-7-0-7-8"
-                 style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">***</tspan></text>
+                 style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">***</tspan></text>
           </g>
         </g>
       </g>
     </g>
     <text
-       x="840.1828"
-       y="234.34692"
-       transform="matrix(0.70710678,0.70710678,-0.70710678,0.70710678,0,0)"
+       x="861.39557"
+       y="213.1337"
+       transform="rotate(45)"
        id="text4090-8-5-6-9-4-6-6-8-7-4"
        xml:space="preserve"
-       style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
-       sodipodi:linespacing="125%"><tspan
-         x="840.1828"
-         y="234.34692"
+       style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+         x="861.39557"
+         y="213.1337"
          id="tspan4092-8-5-5-3-4-0-6-2-11-0"
-         style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+</tspan></text>
     <g
        transform="translate(1.0962334,-2.7492248)"
        id="g3595">
@@ -1347,42 +1308,41 @@
          y="1037.0271"
          id="text4090-8-7-8-7-6-3-8-4-1-2"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="30.942892"
            y="1037.0271"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="25.247679"
          y="1037.0271"
          id="text4090-8-7-8-7-6-3-8-4-1-2-5"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="25.247679"
            y="1037.0271"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-7"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="19.552465"
          y="1037.0271"
          id="text4090-8-7-8-7-6-3-8-4-1-2-7"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="19.552465"
            y="1037.0271"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-9"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="42.830166"
          y="1033.2393"
          id="text4090-8-7-8-7-6-3-8-4-8"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="42.830166"
            y="1033.2393"
            id="tspan4092-8-6-3-1-8-4-4-55-2"
-           style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">New header files must get a new page in the API docs.</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">New header files must get a new page in the API docs.</tspan></text>
     </g>
     <g
        transform="translate(1.0962334,-2.7492248)"
@@ -1392,52 +1352,51 @@
          y="1046.0962"
          id="text4090-8-7-8-7-6-3-8-2"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
-         sodipodi:linespacing="125%"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="42.212418"
            y="1046.0962"
            id="tspan4092-8-6-3-1-8-4-4-5"
-           style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">Available from patchwork, or email header. Reply to Cover letters.</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">Available from patchwork, or email header. Reply to Cover letters.</tspan></text>
       <text
          x="31.140535"
          y="1049.8527"
          id="text4090-8-7-8-7-6-3-8-4-1-2-2"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="31.140535"
            y="1049.8527"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-3"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="25.445322"
          y="1049.8527"
          id="text4090-8-7-8-7-6-3-8-4-1-2-5-2"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="25.445322"
            y="1049.8527"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-7-2"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="19.750109"
          y="1049.8527"
          id="text4090-8-7-8-7-6-3-8-4-1-2-7-1"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="19.750109"
            y="1049.8527"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-9-6"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
       <text
          x="14.016749"
          y="1049.8527"
          id="text4090-8-7-8-7-6-3-8-4-1-2-7-1-8"
          xml:space="preserve"
-         style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+         style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
            x="14.016749"
            y="1049.8527"
            id="tspan4092-8-6-3-1-8-4-4-55-7-3-9-6-5"
-           style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+           style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
     </g>
     <rect
        width="196.44218"
@@ -1449,36 +1408,35 @@
        id="rect3239-0-9-4-2-1-9-9-7"
        style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
     <rect
-       width="678.43036"
-       height="43.497677"
-       rx="7.8557949"
-       ry="6.7630997"
-       x="31.274473"
-       y="836.69745"
+       width="710.73767"
+       height="43.476074"
+       rx="8.2298937"
+       ry="6.7597408"
+       x="31.285275"
+       y="836.70825"
        id="rect3239-0-9-4-3-6"
-       style="fill:none;stroke:#00233b;stroke-width:0.92794865;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+       style="fill:none;stroke:#00233b;stroke-width:0.949551;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
     <text
        x="73.804535"
        y="864.28137"
        id="text4090-8-1-8-65-9-1"
        xml:space="preserve"
-       style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
          sodipodi:role="line"
          x="73.804535"
          y="864.28137"
-         id="tspan3266-8">git format-patch -[N]</tspan></text>
+         id="tspan3266-8"
+         style="font-size:19px;line-height:1.25;font-family:monospace">git format-patch -[N]</tspan></text>
     <text
        x="342.70221"
        y="862.83478"
        id="text4090-8-1-8-65-9-1-7"
        xml:space="preserve"
-       style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
-       sodipodi:linespacing="125%"><tspan
+       style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
          sodipodi:role="line"
          x="342.70221"
          y="862.83478"
          id="tspan3266-8-2"
-         style="font-size:14px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">// creates .patch files for final review</tspan></text>
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:14px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">// creates .patch files for final review</tspan></text>
   </g>
 </svg>
-- 
2.22.0


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v6 12/14] doc: remove references to make from contributing guide
  @ 2020-10-09 10:21  9%   ` Ciara Power
  2020-10-09 10:21  2%   ` [dpdk-dev] [PATCH v6 14/14] doc: update patch cheatsheet to use meson Ciara Power
  1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2020-10-09 10:21 UTC (permalink / raw)
  To: dev; +Cc: Ciara Power, Louise Kilheeney

Make is no longer supported for compiling DPDK, so the remaining references
to it are now removed from the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>

---
v5:
  - Removed reference to test-build.sh used for Make.
  - Added point back in for handling specific code, reworded as
    necessary.
  - Added library statistics section, removing only the mention of
    CONFIG options.
---
 doc/guides/contributing/design.rst        | 41 ++++++++---------------
 doc/guides/contributing/documentation.rst | 31 ++++-------------
 doc/guides/contributing/patches.rst       |  6 ++--
 3 files changed, 23 insertions(+), 55 deletions(-)

diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 5fe7f63942..3e24dc1c7b 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -21,7 +21,7 @@ A file located in a subdir of "linux" is specific to this execution environment.
 
 When absolutely necessary, there are several ways to handle specific code:
 
-* Use a ``#ifdef`` with the CONFIG option in the C code.
+* Use a ``#ifdef`` with a build definition macro in the C code.
   This can be done when the differences are small and they can be embedded in the same C file:
 
   .. code-block:: c
@@ -32,30 +32,25 @@ When absolutely necessary, there are several ways to handle specific code:
      titi();
      #endif
 
-* Use the CONFIG option in the Makefile. This is done when the differences are more significant.
-  In this case, the code is split into two separate files that are architecture or environment specific.
-  This should only apply inside the EAL library.
-
-.. note::
-
-   As in the linux kernel, the ``CONFIG_`` prefix is not used in C code.
-   This is only needed in Makefiles or shell scripts.
+* Use build definition macros and conditions in the Meson build file. This is done when the differences
+  are more significant. In this case, the code is split into two separate files that are architecture
+  or environment specific. This should only apply inside the EAL library.
 
 Per Architecture Sources
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-The following config options can be used:
+The following macro options can be used:
 
-* ``CONFIG_RTE_ARCH`` is a string that contains the name of the architecture.
-* ``CONFIG_RTE_ARCH_I686``, ``CONFIG_RTE_ARCH_X86_64``, ``CONFIG_RTE_ARCH_X86_64_32`` or ``CONFIG_RTE_ARCH_PPC_64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH`` is a string that contains the name of the architecture.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_64_32`` or ``RTE_ARCH_PPC_64`` are defined only if we are building for those architectures.
 
 Per Execution Environment Sources
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The following config options can be used:
+The following macro options can be used:
 
-* ``CONFIG_RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``CONFIG_RTE_EXEC_ENV_FREEBSD`` or ``CONFIG_RTE_EXEC_ENV_LINUX`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
+* ``RTE_EXEC_ENV_FREEBSD`` or ``RTE_EXEC_ENV_LINUX`` are defined only if we are building for this execution environment.
 
 Mbuf features
 -------------
@@ -87,22 +82,14 @@ requirements for preventing ABI changes when implementing statistics.
 Mechanism to allow the application to turn library statistics on and off
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Each library that maintains statistics counters should provide a single build
-time flag that decides whether the statistics counter collection is enabled or
-not. This flag should be exposed as a variable within the DPDK configuration
-file. When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used,
+for example as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by current library are
 collected for all the instances of every object type provided by the library.
 When this flag is cleared, none of the counters supported by the current library
 are collected for any instance of any object type provided by the library:
 
-.. code-block:: console
-
-   # DPDK file config/common_linux, config/common_freebsd, etc.
-   CONFIG_RTE_<LIBRARY_NAME>_STATS_COLLECT=y/n
-
-The default value for this DPDK configuration file variable (either "yes" or
-"no") is decided by each library.
-
 
 Prevention of ABI changes due to library statistics support
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index be985e6cf8..fd73e6538b 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -218,25 +218,14 @@ Build commands
 ~~~~~~~~~~~~~~
 
 The documentation is built using the standard DPDK build system.
-Some examples are shown below:
 
-* Generate all the documentation targets::
+To enable doc building::
 
-     make doc
+   meson configure -Denable_docs=true
 
-* Generate the Doxygen API documentation in Html::
+See :doc:`../linux_gsg/build_dpdk` for more detail on compiling DPDK with meson.
 
-     make doc-api-html
-
-* Generate the guides documentation in Html::
-
-     make doc-guides-html
-
-* Generate the guides documentation in Pdf::
-
-     make doc-guides-pdf
-
-The output of these commands is generated in the ``build`` directory::
+The output is generated in the ``build`` directory::
 
    build/doc
          |-- html
@@ -251,10 +240,6 @@ The output of these commands is generated in the ``build`` directory::
 
    Make sure to fix any Sphinx or Doxygen warnings when adding or updating documentation.
 
-The documentation output files can be removed as follows::
-
-   make doc-clean
-
 
 Document Guidelines
 -------------------
@@ -304,7 +289,7 @@ Line Length
   Long literal command lines can be shown wrapped with backslashes. For
   example::
 
-     testpmd -l 2-3 -n 4 \
+     dpdk-testpmd -l 2-3 -n 4 \
              --vdev=virtio_user0,path=/dev/vhost-net,queues=2,queue_size=1024 \
              -- -i --tx-offloads=0x0000002c --enable-lro --txq=2 --rxq=2 \
              --txd=1024 --rxd=1024
@@ -456,7 +441,7 @@ Code and Literal block sections
   For long literal lines that exceed that limit try to wrap the text at sensible locations.
   For example a long command line could be documented like this and still work if copied directly from the docs::
 
-     build/app/testpmd -l 0-2 -n3 --vdev=net_pcap0,iface=eth0     \
+     ./<build_dir>/app/dpdk-testpmd -l 0-2 -n3 --vdev=net_pcap0,iface=eth0    \
                                --vdev=net_pcap1,iface=eth1     \
                                -- -i --nb-cores=2 --nb-ports=2 \
                                   --total-num-mbufs=2048
@@ -739,9 +724,5 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
      /** Array of physical page addresses for the mempool buffer. */
      phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
 
-* Check for Doxygen warnings in new code by checking the API documentation build::
-
-     make doc-api-html >/dev/null
-
 * Read the rendered section of the documentation that you have added for correctness, clarity and consistency
   with the surrounding text.
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c3..9fa5a79c85 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -486,9 +486,9 @@ By default, ABI compatibility checks are disabled.
 To enable them, a reference version must be selected via the environment
 variable ``DPDK_ABI_REF_VERSION``.
 
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
-results in a subfolder of the current working directory.
+The ``devtools/test-meson-builds.sh`` script then builds this reference version
+in a temporary directory and stores the results in a subfolder of the current
+working directory.
 The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
 to a different location.
 
-- 
2.22.0
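
As a minimal sketch, not part of the patch above, of how the RTE_ARCH_* and
RTE_EXEC_ENV_* macros described in the updated design guide are typically
consumed from C code (assuming only that the meson build defines them as the
guide documents):

    #include <stdio.h>
    #include <rte_config.h>  /* meson-generated; carries RTE_ARCH_* and RTE_EXEC_ENV_* */

    static void print_build_target(void)
    {
    #ifdef RTE_ARCH_X86_64
            printf("built for x86_64\n");
    #endif
    #ifdef RTE_EXEC_ENV_LINUX
            printf("built for the linux execution environment\n");
    #endif
    }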


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v8 8/8] sched: remove redundant code
  @ 2020-10-09  8:28  3%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-09  8:28 UTC (permalink / raw)
  To: jasvinder.singh, savinay.dharmappa; +Cc: cristian.dumitrescu, dev

Hi,

> Remove redundant data structure field references from
> functions and subport level data structures. It also
> updates the release and deprecation notes
> 
> Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
> ---
> 
>  doc/guides/rel_notes/deprecation.rst   |  6 ----
>  doc/guides/rel_notes/release_20_11.rst |  1 +
>  lib/librte_sched/rte_sched.c           | 42 ++------------------------
>  lib/librte_sched/rte_sched.h           | 12 --------

I wonder why this patch exists.

Documentation updates should be done when adding the feature (patch 3).
Please try to conform to the release notes format and recommendations.

Redundant code should be removed when it becomes useless.
Or should this patch come last because the fields are still used in apps?
Anyway, it is strange that a test for params->qsize is added.

About the API changes, please update the ABI section of the release notes
in each patch that removes some old API.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 4/5] cryptodev: remove list ends from asymmetric crypto api
  2020-10-08 19:51  0%   ` Akhil Goyal
@ 2020-10-09  7:02  0%     ` Kusztal, ArkadiuszX
  0 siblings, 0 replies; 200+ results
From: Kusztal, ArkadiuszX @ 2020-10-09  7:02 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Trahe, Fiona, ruifeng.wang, michaelsh, Anoob Joseph

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Thursday, 8 October 2020 21:51
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; ruifeng.wang@arm.com;
> michaelsh@marvell.com
> Subject: RE: [PATCH v2 4/5] cryptodev: remove list ends from asymmetric crypto
> api
> 
> Hi Arek/Fiona,
> 
> > This patch removes RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END,
> > RTE_CRYPTO_ASYM_OP_LIST_END,
> > RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> > enumerators from the asymmetric crypto API. Once the asymmetric API is no
> > longer experimental, adding new entries will be possible without ABI
> > breakage.
> 
> I believe XFORM_TYPE, ASYM_OP, and PADDING_TYPE are not going to change
> in the near future. Hence LIST_END should not be removed from these enums.
> Adding a LIST_END has its own benefits and we should not remove it until we
> have a solid reason to. Moreover, these are experimental.
> We should revisit these when we think ASYM is stable.

As for XFORM_TYPE, it could be extended with ECDH even if ECPM is present (as we have DH op enums), and I think EdDSA could get its own enum as well.
As for asym_op, I don't know which way it will go: RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE is a distinct case, as it is more about generation than computation; this could be clarified in the future.
As for RSA padding, I agree it fulfills today's requirements.

Though yes, since it is experimental I will remove the asym patch from the v3 patchset.
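
A minimal sketch, using made-up enum names rather than the real rte_crypto
ones, of why a trailing LIST_END sentinel turns enum extension into an ABI
concern while an open-ended enum can grow safely:

    /* With a sentinel, inserting a new xform shifts the sentinel's value,
     * which applications may have compiled in for range checks. */
    enum xform_with_end {
            XFORM_RSA,
            XFORM_DH,
            XFORM_LIST_END  /* 2 today, 3 once another xform is added above it */
    };

    /* Without the sentinel, new entries are appended at the end; existing
     * values never change, so previously built applications keep working. */
    enum xform_open {
            XFORM_OPEN_RSA,
            XFORM_OPEN_DH
            /* future entries appended here */
    };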

> 
> IMO, we should only remove list ends in algo types.
> 
> Regards,
> Akhil

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
  @ 2020-10-09  6:54  0%           ` Ruifeng Wang
  2020-10-13 13:53  0%             ` Kevin Traynor
  0 siblings, 1 reply; 200+ results
From: Ruifeng Wang @ 2020-10-09  6:54 UTC (permalink / raw)
  To: Kevin Traynor, Medvedkin, Vladimir, Bruce Richardson
  Cc: dev, Honnappa Nagarahalli, nd, nd


> -----Original Message-----
> From: Kevin Traynor <ktraynor@redhat.com>
> Sent: Wednesday, September 30, 2020 4:46 PM
> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Bruce Richardson
> <bruce.richardson@intel.com>
> Cc: dev@dpdk.org; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
> 
> On 16/09/2020 04:17, Ruifeng Wang wrote:
> >
> >> -----Original Message-----
> >> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> >> Sent: Wednesday, September 16, 2020 12:28 AM
> >> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
> >> <Ruifeng.Wang@arm.com>
> >> Cc: dev@dpdk.org; Honnappa Nagarahalli
> >> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 2/2] lpm: hide internal data
> >>
> >> Hi Ruifeng,
> >>
> >> On 15/09/2020 17:02, Bruce Richardson wrote:
> >>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
> >>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
> >>>> be exposed to the user.
> >>>> Hide the unneeded exposure of structure fields for better ABI
> >>>> maintainability.
> >>>>
> >>>> Suggested-by: David Marchand <david.marchand@redhat.com>
> >>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> >>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
> >>>> ---
> >>>>   lib/librte_lpm/rte_lpm.c | 152
> >>>> +++++++++++++++++++++++---------------
> >> -
> >>>>   lib/librte_lpm/rte_lpm.h |   7 --
> >>>>   2 files changed, 91 insertions(+), 68 deletions(-)
> >>>>
> >>> <snip>
> >>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> >>>> index 03da2d37e..112d96f37 100644
> >>>> --- a/lib/librte_lpm/rte_lpm.h
> >>>> +++ b/lib/librte_lpm/rte_lpm.h
> >>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
> >>>>
> >>>>   /** @internal LPM structure. */
> >>>>   struct rte_lpm {
> >>>> -	/* LPM metadata. */
> >>>> -	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
> >>>> -	uint32_t max_rules; /**< Max. balanced rules per lpm. */
> >>>> -	uint32_t number_tbl8s; /**< Number of tbl8s. */
> >>>> -	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> >> Rule info table. */
> >>>> -
> >>>>   	/* LPM Tables. */
> >>>>   	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
> >>>>   			__rte_cache_aligned; /**< LPM tbl24 table. */
> >>>>   	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> >>>> -	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> >>>>   };
> >>>>
> >>>
> >>> Since this changes the ABI, does it not need advance notice?
> >>>
> >>> [Basically the return value point from rte_lpm_create() will be
> >>> different, and that return value could be used by rte_lpm_lookup()
> >>> which as a static inline function will be in the binary and using
> >>> the old structure offsets.]
> >>>
> >>
> >> Agree with Bruce, this patch breaks ABI, so it can't be accepted
> >> without prior notice.
> >>
> > So if the change wants to happen in 20.11, a deprecation notice should
> > have been added in 20.08.
> > I should have added a deprecation notice. This change will have to wait for
> next ABI update window.
> >
> 
> Do you plan to extend? or is this just speculative?
It is speculative.

> 
> A quick scan and there seems to be several projects using some of these
> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
> gatekeeper. I didn't look at the details to see if they are really needed.
> 
> Not sure how much notice they'd need or if they update DPDK much, but I
> think it's worth having a closer look as to how they use lpm and what the
> impact to them is.
Checked the projects listed above. BESS, NFF-Go and DPVS don't access the members to be hidden.
They will not be impacted by this patch.
But Gatekeeper accesses the rte_lpm internal members that are to be hidden, so its compilation will be broken by this patch.
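
For illustration, a sketch of the difference (not taken from any of those
projects; lpm_usage_sketch is a made-up name, and max_rules is one of the
fields this patch removes from the public struct):

#include <rte_common.h>
#include <rte_lpm.h>

static void
lpm_usage_sketch(struct rte_lpm *lpm, uint32_t ip)
{
	/* Direct field access: compiles against today's header, but breaks
	 * once the metadata fields move behind an internal structure. */
	uint32_t max = lpm->max_rules;

	/* Going through the API is unaffected by hiding the fields. */
	uint32_t next_hop = 0;
	int found = (rte_lpm_lookup(lpm, ip, &next_hop) == 0);

	RTE_SET_USED(max);
	RTE_SET_USED(found);
	RTE_SET_USED(next_hop);
}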

> 
> > Thanks.
> > Ruifeng
> >>>>   /** LPM RCU QSBR configuration structure. */
> >>>> --
> >>>> 2.17.1
> >>>>
> >>
> >> --
> >> Regards,
> >> Vladimir


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] Techboard Minutes of Meeting - 10/8/2020
@ 2020-10-08 23:37  4% Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-10-08 23:37 UTC (permalink / raw)
  To: techboard; +Cc: dev, nd, nd

Members Attending:
---------------------------

     Bruce Richardson
     Ferruh Yigit
     Honnappa Nagarahalli (Chair)
     Jerin Jacob
     Kevin Traynor
     Konstantin Ananyev
     Maxime Coquelin
     Olivier Matz
     Stephen Hemminger
     Thomas Monjalon

The technical board meeting takes place every second Wednesday on https://meet.jit.si/DPDK, at 3pm UTC.
(Trial runs on jitsi, backup on IRC channel #dpdk-board.) Meetings are public and DPDK community members are welcome to attend.

Next meeting will be on 2020-10-21, and will be chaired by Jerin.

Minutes:
-----------

#1 ABI and API breaks in 20.11 - Pending requests to Techboard?
    a) stats per queue in common stats
	- Use xstats as the alternative
	- Techboard decided to remove (deprecation notice in 20.11 and removed in 21.11) the queue stats from common stats (array at the end of the structure)
		-  Suggestion to do a POC to ensure there are no other dependencies
		- AI: Ferruh: Communicate to the mailing list and send the deprecation email. Add documentation to indicate that the array size is limited to 256
	- Telemetry needs to be updated using xstats only
	- AI: Thomas: Improve xstats documentation and fix the underscore typo

    b) pmdinfogen rewrite in Python
	- Creates a dependency for build on latest version of Python
	- CI needs to be updated to use the Python package
	- AI: Maxime to check the dependencies for RedHat

    c) minimum meson version
	- Latest version of meson, 0.55, is not packaged with distros
	- AI: Bruce to work with patch owner to check if an older version of meson can be used to implement these new features
	- REL packages 0.49.2 version of meson.

    d) LPM
	- Looks like other projects are using the LPM structure members, not just APIs.
	- AI: Honnappa: will take a closer look.

#2 Asks from Governing Board
    a) Ask from Trishan for the Governing board call on 2021 priorities. What does the Technical board want to achieve in 2021?
        * Inspiration and actionable goals that are on the roadmap
        * API/ABI support plans, new capabilities, CI testing integration, gaps that the Techboard would like to address, etc.
	- DTS
	- Documentation
	- ABI work to continue - increase the ABI stability for CryptoDev APIs
	- Reduce barriers to adopt DPDK
		- DPDK Interaction with Kernel
		- Make it easier to pass arguments to DPDK initialization
	- Using DPDK in containers

    b) Review of Technical Section of DPDK "Golden Deck".  https://docs.google.com/presentation/d/1xD5R_WN8xMjEFcb6nBarHWNMRI5nrE0g-poKqc0M0HA/edit#slide=id.g9118ec1251_0_0

#3 Do we need a reviewer role in the process?
      Techboard members to discuss this on email.
      AI: Honnappa will initiate an email to Techboard.

#4 Call for reviews - ethdev and testpmd patches need review

#5 Next governing board meeting is the last meeting for Bruce to represent the tech board. It is Ferruh's turn next.

Agenda items not discussed:

#1 Update on DTS usability
    a) UNH has started the document

#2 Security process issues
    a) Ferruh/Stephen: Anything to discuss?

#3 DMARC mitigation in the mailing list

#4 Prepare the CFP text for a virtual Asia event in January 2021


Thank you,
Honnappa

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: remove crypto list end enumerators
  @ 2020-10-08 19:58  3%   ` Akhil Goyal
  2020-10-12  5:15  0%     ` Kusztal, ArkadiuszX
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-08 19:58 UTC (permalink / raw)
  To: Arek Kusztal, dev; +Cc: fiona.trahe, ruifeng.wang, michaelsh

> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> b/lib/librte_cryptodev/rte_crypto_sym.h
> index f29c98051..7a2556a9e 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -132,15 +132,12 @@ enum rte_crypto_cipher_algorithm {
>  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
>  	 */
> 
> -	RTE_CRYPTO_CIPHER_DES_DOCSISBPI,
> +	RTE_CRYPTO_CIPHER_DES_DOCSISBPI
>  	/**< DES algorithm using modes required by
>  	 * DOCSIS Baseline Privacy Plus Spec.
>  	 * Chained mbufs are not supported in this mode, i.e. rte_mbuf.next
>  	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
>  	 */
> -
> -	RTE_CRYPTO_CIPHER_LIST_END
> -
>  };

Probably we should add a comment for each of the enums that we change,
saying that the user can define its own LIST_END = last item in the enum + 1,
and that LIST_END is not provided, to avoid ABI breakage across releases when
new algos are added.
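
A rough sketch of that suggestion (illustrative only; APP_CIPHER_LIST_END is a
made-up application-side name, and RTE_CRYPTO_CIPHER_DES_DOCSISBPI is simply
the last enumerator visible in the hunk above):

#include <rte_crypto_sym.h>

/* Application-defined bound, pinned to the release the application was
 * built against and revisited explicitly on upgrade. */
#define APP_CIPHER_LIST_END	(RTE_CRYPTO_CIPHER_DES_DOCSISBPI + 1)

/* e.g. one label per supported algorithm, filled in at init time */
static const char *app_cipher_labels[APP_CIPHER_LIST_END];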

> 
>  /** Cipher algorithm name strings */
> @@ -312,10 +309,8 @@ enum rte_crypto_auth_algorithm {
>  	/**< HMAC using 384 bit SHA3 algorithm. */
>  	RTE_CRYPTO_AUTH_SHA3_512,
>  	/**< 512 bit SHA3 algorithm. */
> -	RTE_CRYPTO_AUTH_SHA3_512_HMAC,
> +	RTE_CRYPTO_AUTH_SHA3_512_HMAC
>  	/**< HMAC using 512 bit SHA3 algorithm. */
> -
> -	RTE_CRYPTO_AUTH_LIST_END
>  };
> 
>  /** Authentication algorithm name strings */
> @@ -412,9 +407,8 @@ enum rte_crypto_aead_algorithm {
>  	/**< AES algorithm in CCM mode. */
>  	RTE_CRYPTO_AEAD_AES_GCM,
>  	/**< AES algorithm in GCM mode. */
> -	RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
> +	RTE_CRYPTO_AEAD_CHACHA20_POLY1305
>  	/**< Chacha20 cipher with poly1305 authenticator */
> -	RTE_CRYPTO_AEAD_LIST_END
>  };
> 
>  /** AEAD algorithm name strings */
> --
> 2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 4/5] cryptodev: remove list ends from asymmetric crypto api
  @ 2020-10-08 19:51  0%   ` Akhil Goyal
  2020-10-09  7:02  0%     ` Kusztal, ArkadiuszX
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-08 19:51 UTC (permalink / raw)
  To: Arek Kusztal, dev; +Cc: fiona.trahe, ruifeng.wang, michaelsh

Hi Arek/Fiona,

> This patch removes RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END,
> RTE_CRYPTO_ASYM_OP_LIST_END,
> RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> enumerators from the asymmetric crypto API. Once the asymmetric API is
> no longer experimental, adding new entries will be possible without
> ABI breakage.

I believe XFORM_TYPE, ASYM_OP, and PADDING_TYPE are not going to
change in the near future, hence LIST_END should not be removed from these
enums. Adding a LIST_END has its own benefits and we should not remove
it until we have a solid reason for it. Moreover, these are experimental.
We should revisit these when we think ASYM is stable.

IMO, we should only remove list ends in algo types.

Regards,
Akhil

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  2020-10-08  8:55  0%                 ` Jerin Jacob
@ 2020-10-08 15:13  0%                   ` Asaf Penso
  0 siblings, 0 replies; 200+ results
From: Asaf Penso @ 2020-10-08 15:13 UTC (permalink / raw)
  To: Jerin Jacob, Nipun Gupta
  Cc: Stephen Hemminger, dpdk-dev, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit, Andrew Rybchenko, Hemant Agrawal, Sachin Saxena,
	Rohit Raj

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Jerin Jacob
>Sent: Thursday, October 8, 2020 11:56 AM
>To: Nipun Gupta <nipun.gupta@nxp.com>
>Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
><dev@dpdk.org>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
>Ferruh Yigit <ferruh.yigit@intel.com>; Andrew Rybchenko
><arybchenko@solarflare.com>; Hemant Agrawal
><hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
>Rohit Raj <rohit.raj@nxp.com>
>Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
>packets
>
>On Thu, Oct 8, 2020 at 2:23 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>>
>>
>>
>> > -----Original Message-----
>> > From: Jerin Jacob <jerinjacobk@gmail.com>
>> > Sent: Tuesday, October 6, 2020 6:44 PM
>> > To: Nipun Gupta <nipun.gupta@nxp.com>
>> > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
>> > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh
>Yigit
>> > <ferruh.yigit@intel.com>; Andrew Rybchenko
>> > <arybchenko@solarflare.com>; Hemant Agrawal
>> > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
>> > Rohit Raj <rohit.raj@nxp.com>
>> > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to
>> > drop error packets
>> >
>> > On Tue, Oct 6, 2020 at 6:40 PM Nipun Gupta <nipun.gupta@nxp.com>
>wrote:
>> > >
>> > >
>> > >
>> > > > -----Original Message-----
>> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
>> > > > Sent: Tuesday, October 6, 2020 5:31 PM
>> > > > To: Nipun Gupta <nipun.gupta@nxp.com>
>> > > > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
>> > > > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>;
>Ferruh
>> > > > Yigit <ferruh.yigit@intel.com>; Andrew Rybchenko
>> > <arybchenko@solarflare.com>;
>> > > > Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
>> > > > <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
>> > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to
>> > > > drop error packets
>> > > >
>> > > > On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com>
>wrote:
>> > > > >
>> > > > >
>> > > > >
>> > > > > > -----Original Message-----
>> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
>> > > > > > Sent: Monday, October 5, 2020 9:40 PM
>> > > > > > To: Stephen Hemminger <stephen@networkplumber.org>
>> > > > > > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev
>> > > > > > <dev@dpdk.org>;
>> > > > Thomas
>> > > > > > Monjalon <thomas@monjalon.net>; Ferruh Yigit
>> > <ferruh.yigit@intel.com>;
>> > > > > > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant
>Agrawal
>> > > > > > <hemant.agrawal@nxp.com>; Sachin Saxena
>> > > > > > <sachin.saxena@nxp.com>;
>> > > > Rohit
>> > > > > > Raj <rohit.raj@nxp.com>
>> > > > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx
>> > > > > > offload to drop
>> > error
>> > > > > > packets
>> > > > > >
>> > > > > > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
>> > > > > > <stephen@networkplumber.org> wrote:
>> > > > > > >
>> > > > > > > On Mon,  5 Oct 2020 12:45:04 +0530 nipun.gupta@nxp.com
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > From: Nipun Gupta <nipun.gupta@nxp.com>
>> > > > > > > >
>> > > > > > > > This change adds a RX offload capability, which once
>> > > > > > > > enabled, hardware will drop the packets in case there of
>> > > > > > > > any error in the packet such as L3 checksum error or L4
>checksum.
>> > > > > >
>> > > > > > IMO, Providing additional support up to the level to choose
>> > > > > > the errors to drops give more control to the application.
>> > > > > > Meaning,
>> > > > > > L1 errors such as FCS error
>> > > > > > L2 errors ..
>> > > > > > L3 errors such checksum
>> > > > > > i.e ethdev spec need to have  error level supported by PMD
>> > > > > > and the application can set the layers interested to drop.
>> > > > >
>> > > > > Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there
>> > > > > to drop
>> > all
>> > > > the
>> > > > > error packets? Maybe we can rename it to
>> > > > DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.
>> > > >
>> > > > IMHO,  we introduce such shortcut for a single flag for all err
>> > > > drop then we can not change the scheme without an API/ABI break.
>> > >
>> > > Are the following offloads fine:
>> > >         DEV_RX_OFFLOAD_L1_FCS_ERR_PKT_DROP
>> > >         DEV_RX_OFFLOAD_L3_CSUM_ERR_PKT_DROP
>> > >         DEV_RX_OFFLOAD_L4_CSUM_ERR_PKT_DROP
>> > >         DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP
>> > >
>> > > Please let me know in case I need to add any other too.
>> >
>> > I think, single offload flags and some config/capability structure
>> > to define the additional layer selection would be good, instead of
>> > adding a lot of new offload flags.
>>
>>
>> +/**
>> + * A structure used to enable/disable error packet drop on Rx.
>> + */
>> +struct rte_rx_err_pkt_drop_conf {
>> +       /** enable/disable all RX error packet drop.
>> +        * 0 (default) - disable, 1 enable
>> +        */
>> +       uint32_t all:1;
>> +};
>> +
>>  /**
>>   * A structure used to configure an Ethernet port.
>>   * Depending upon the RX multi-queue mode, extra advanced @@ -1236,6
>> +1246,8 @@ struct rte_eth_conf {
>>         uint32_t dcb_capability_en;
>>         struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
>>         struct rte_intr_conf intr_conf; /**< Interrupt mode
>> configuration. */
>> +       struct rte_rx_err_pkt_drop_conf err_pkt_drop_conf;
>> +       /**< RX error packet drop configuration. */
>>
>> Is this the kind of changes you are talking about?
>
>
>Yes.
>
>>
>> Also, more changes will be there in 'struct rte_eth_dev_info'
>> structure, defining additional separate capability something like 'uint64_t
>rx_err_drop_offload_capa'.
>>
>> Regards,
>> Nipun
>>
>> >
>> >
>> > > Ill send a v3.
>> > >
>> > > Thanks,
>> > > Nipun
>> > >
>> > > >
>> > > > >
>> > > > > Currently we have not planned to add separate knobs for
>> > > > > separate error in the driver, maybe we can define them
>> > > > > separately, or we need have them in this series itself?
>> > > >
>> > > > I think, ethdev API can have the capability on what are levels
>> > > > it supported, in your driver case, you can express the same.
>> > > >
>> > > >
>> > > > >
>> > > > > >
>> > > > > > > >
>> > > > > > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
>> > > > > > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
>> > > > > > > > ---
>> > > > > > > > These patches are based over series:
>> > > > > > > >
>> > > > > >
>> > > >
>> > https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpa
>> > tchwo
>> > > > > >
>> > > >
>> >
>rk.dpdk.org%2Fpatch%2F78630%2F&amp;data=02%7C01%7Cnipun.gupta%40
>nx
>> > > > > >
>> > > >
>> >
>p.com%7C90b516fd465c48945e7008d869492b3e%7C686ea1d3bc2b4c6fa92cd9
>> > > > > >
>> > > >
>> >
>9c5c301635%7C0%7C0%7C637375110263097933&amp;sdata=RBQswMBsfpM6
>> > > > > >
>nyKur%2FaHvOMvNK7RU%2BRyhHt%2FXBsP1OM%3D&amp;reserved=0
>> > > > > > > >
>> > > > > > > > Changes in v2:
>> > > > > > > >  - Add support in DPAA1 driver (patch 2/3)
>> > > > > > > >  - Add support and config parameter in testpmd (patch
>> > > > > > > > 3/3)
>> > > > > > > >
>> > > > > > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
>> > > > > > > >  1 file changed, 1 insertion(+)
>> > > > > > >
>> > > > > > > Maybe this should be an rte_flow match/action which would
>> > > > > > > then make
>> > it
>> > > > > > > more flexible?
>> > > > > >
>> > > > > > I think, it is not based on any Patten matching. So IMO, it
>> > > > > > should be best
>> > if it
>> > > > > > is part of RX offload.
>> > > > > >
>> > > > > > >
>> > > > > > > There is not much of a performance gain for this in real
>> > > > > > > life and if only one driver supports it then I am not convinced this
>is needed.
>> > > > > >
>> > > > > > Marvell HW has this feature.
Reviewed-By: Asaf Penso <asafp@nvidia.com>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 6/6] doc: update for two ports hairpin mode
  @ 2020-10-08 12:05  5%     ` Bing Zhao
    1 sibling, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-08 12:05 UTC (permalink / raw)
  To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
	bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

In the release notes, the two-port hairpin mode feature is added.

In the rte_flow guide, a note is added to mention that metadata
could be used to connect the hairpin RX and TX flows if the hairpin
is working in explicit TX flow rule mode.

In the testpmd command-line documentation, the new parameter to set the
hairpin working mode is described.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 3 +++
 doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
 doc/guides/testpmd_app_ug/run_app.rst  | 8 ++++++++
 3 files changed, 19 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..bb54d67 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
 loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
 the other path depending on HW capability.
 
+In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
+used to connect the RX and TX flows if it can be propagated from RX to TX path.
+
 .. _table_rte_flow_action_set_meta:
 
 .. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0b2a370..05ceea0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -109,6 +109,10 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated the ethdev library to support hairpin between two ports.**
+
+  New APIs are introduced to support binding / unbinding 2 ports hairpin.
+  Hairpin TX part flow rules can be inserted explicitly.
 
 Removed Items
 -------------
@@ -240,6 +244,10 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * ``struct rte_eth_hairpin_conf`` has two new members:
+
+    * ``uint32_t tx_explicit:1;``
+    * ``uint32_t manual_bind:1;``
 
 Known Issues
 ------------
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index e2539f6..4e627c4 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -497,3 +497,11 @@ The command line options are:
 *   ``--record-burst-stats``
 
     Enable display of RX and TX burst stats.
+
+*   ``--hairpin-mode=0xXX``
+
+    Set the hairpin port mode with bitmask, only valid when hairpin queues number is set.
+    bit 4 - explicit TX flow rule
+    bit 1 - two hairpin ports paired
+    bit 0 - two hairpin ports loop
+    The default value is 0. Hairpin will use single port mode and implicit TX flow mode.
-- 
1.8.3.1
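
For reference, a caller-side sketch of what the two new bit-fields control
(illustrative only; hairpin_txq_setup_sketch is a made-up name, and everything
except manual_bind / tx_explicit is the pre-existing hairpin API):

#include <rte_ethdev.h>

static int
hairpin_txq_setup_sketch(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
			 uint16_t peer_port_id, uint16_t peer_queue_id)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 1,	/* ports are bound explicitly by the app */
		.tx_explicit = 1,	/* app inserts the Tx-side flow rules */
	};

	conf.peers[0].port = peer_port_id;	/* the other port of the pair */
	conf.peers[0].queue = peer_queue_id;

	return rte_eth_tx_hairpin_queue_setup(port_id, queue_id, nb_desc,
					      &conf);
}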


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid
  @ 2020-10-08 11:55  3% ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-08 11:55 UTC (permalink / raw)
  To: Ciara Loftus, dev; +Cc: techboard

On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> Supporting this would require locks, which would impact the performance of
> the more typical cases - xsks with different qids and netdevs.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")

Off the topic.

This is patch 80000 in patchwork! That was fast.
Thanks to everyone who contributed!

https://patches.dpdk.org/patch/80000/

The historical numbers from DPDK patchwork:
80000 - Oct.   8, 2020 (153 days) [ 5 months / 21 weeks and 6 days ]
70000 - May    8, 2020 (224 days)
60000 - Sept. 27, 2019 (248 days)
50000 - Jan.  22, 2019 (253 days)
40000 - May   14, 2018 (217 days)
30000 - Oct.   9, 2017 (258 days)
20000 - Jan.  25, 2017 (372 days)
10000 - Jan.  20, 2016 (645 days)
00001 - April 16, 2014


This is the fastest 10K patches so far; the ~250-day average per 10K patches
has been beaten by roughly 100 days.
v20.11 being an ABI-break release must have helped, but the big difference
may also mean the project is still growing.

To put it in another perspective, this means an average of 65 patches per day,
continuously, for the last few months. Thanks again to all contributors.

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH V16 1/3] ethdev: introduce FEC API
  @ 2020-10-08 10:02  2%   ` Min Hu (Connor)
  0 siblings, 0 replies; 200+ results
From: Min Hu (Connor) @ 2020-10-08 10:02 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, thomas, arybchenko, ferruh.yigit, linuxarm

This patch adds Forward Error Correction (FEC) support to ethdev.
It introduces APIs to query and configure FEC information in
hardware.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
v14->v15:
change mode to fec_capa in fec set API.

---
v13->v14:
change mode to fec_capa.
fix comment about API.

---
v12->v13:
change fec get capa API.
fix comment styles.

---
v10->v11:
allow to report capabilities per link speed.
specify what should be reported if link is down
when get FEC.
change mode to capa bitmask.

---
v9->v10:
add macro RTE_ETH_FEC_MODE_CAPA_MASK(x) to indicate
different FEC mode capa.

---
v8->v9:
added reviewed-by and acked-by.

---
v7->v8:
put AUTO just after NOFEC in rte_fec_mode definition.

---
v6->v7:
deleted RTE_ETH_FEC_NUM to prevent ABI breakage.
add new macro to indicate translation from fec mode
to capa.

---
v5->v6:
modified release notes.
deleted check duplicated for FEC API
fixed code styles according to DPDK coding style.
added _eth prefix.

---
v4->v5:
Modifies FEC capa definitions using macros.
Add RTE_ prefix for public FEC mode enum.
add release notes about FEC for dpdk20_11.

---
v2->v3:
add function return value "-ENOTSUP" for API.

---
 doc/guides/rel_notes/release_20_11.rst   |   5 ++
 lib/librte_ethdev/rte_ethdev.c           |  44 +++++++++++++
 lib/librte_ethdev/rte_ethdev.h           | 105 +++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev_driver.h    |  88 ++++++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev_version.map |   3 +
 5 files changed, 245 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c6642f5..1f04bd5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,11 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Added the FEC API, for a generic FEC query and config.**
+
+  Added the FEC API which provides functions for query FEC capabilities and
+  current FEC mode from device. Also, API for configuring FEC mode is also provided.
+
 
 Removed Items
 -------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index dfe5c1b..ca596c1 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -3679,6 +3679,50 @@ rte_eth_led_off(uint16_t port_id)
 	return eth_err(port_id, (*dev->dev_ops->dev_led_off)(dev));
 }
 
+int
+rte_eth_fec_get_capability(uint16_t port_id,
+			   struct rte_eth_fec_capa *speed_fec_capa,
+			   unsigned int num)
+{
+	struct rte_eth_dev *dev;
+	int ret;
+
+	if (speed_fec_capa == NULL && num > 0)
+		return -EINVAL;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->fec_get_capability, -ENOTSUP);
+	ret = (*dev->dev_ops->fec_get_capability)(dev, speed_fec_capa, num);
+
+	return ret;
+}
+
+int
+rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
+{
+	struct rte_eth_dev *dev;
+
+	if (fec_capa == NULL)
+		return -EINVAL;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->fec_get, -ENOTSUP);
+	return eth_err(port_id, (*dev->dev_ops->fec_get)(dev, fec_capa));
+}
+
+int
+rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa)
+{
+	struct rte_eth_dev *dev;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->fec_set, -ENOTSUP);
+	return eth_err(port_id, (*dev->dev_ops->fec_set)(dev, fec_capa));
+}
+
 /*
  * Returns index into MAC address array of addr. Use 00:00:00:00:00:00 to find
  * an empty spot.
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 645a186..7938202 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1544,6 +1544,29 @@ struct rte_eth_dcb_info {
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
 
+/**
+ * This enum indicates the possible Forward Error Correction (FEC) modes
+ * of an ethdev port.
+ */
+enum rte_eth_fec_mode {
+	RTE_ETH_FEC_NOFEC = 0,      /**< FEC is off */
+	RTE_ETH_FEC_AUTO,	    /**< FEC autonegotiation modes */
+	RTE_ETH_FEC_BASER,          /**< FEC using common algorithm */
+	RTE_ETH_FEC_RS,             /**< FEC using RS algorithm */
+};
+
+/* Translate from FEC mode to FEC capa */
+#define RTE_ETH_FEC_MODE_TO_CAPA(x)	(1U << (x))
+
+/* This macro indicates FEC capa mask */
+#define RTE_ETH_FEC_MODE_CAPA_MASK(x)	(1U << (RTE_ETH_FEC_ ## x))
+
+/* A structure used to get capabilities per link speed */
+struct rte_eth_fec_capa {
+	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t capa;  /**< FEC capabilities bitmask */
+};
+
 #define RTE_ETH_ALL RTE_MAX_ETHPORTS
 
 /* Macros to check for valid port */
@@ -3397,6 +3420,88 @@ int  rte_eth_led_on(uint16_t port_id);
 int  rte_eth_led_off(uint16_t port_id);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Get Forward Error Correction(FEC) capability.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param speed_fec_capa
+ *   speed_fec_capa is out only with per-speed capabilities.
+ *   If set to NULL, the function returns the required number
+ *   of required array entries.
+ * @param num
+ *   a number of elements in an speed_fec_capa array.
+ *
+ * @return
+ *   - A non-negative value lower or equal to num: success. The return value
+ *     is the number of entries filled in the fec capa array.
+ *   - A non-negative value higher than num: error, the given fec capa array
+ *     is too small. The return value corresponds to the num that should
+ *     be given to succeed. The entries in fec capa array are not valid and
+ *     shall not be used by the caller.
+ *   - (-ENOTSUP) if underlying hardware OR driver doesn't support.
+ *     that operation.
+ *   - (-EIO) if device is removed.
+ *   - (-ENODEV)  if *port_id* invalid.
+ *   - (-EINVAL)  if *num* or *speed_fec_capa* invalid
+ */
+__rte_experimental
+int rte_eth_fec_get_capability(uint16_t port_id,
+			       struct rte_eth_fec_capa *speed_fec_capa,
+			       unsigned int num);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Get current Forward Error Correction(FEC) mode.
+ * If link is down and AUTO is enabled, AUTO is returned, otherwise,
+ * configured FEC mode is returned.
+ * If link is up, current FEC mode is returned.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param fec_capa
+ *   A bitmask of enabled FEC modes. If AUTO bit is set, other
+ *   bits specify FEC modes which may be negotiated. If AUTO
+ *   bit is clear, specify FEC modes to be used (only one valid
+ *   mode per speed may be set).
+ * @return
+ *   - (0) if successful.
+ *   - (-ENOTSUP) if underlying hardware OR driver doesn't support.
+ *     that operation.
+ *   - (-EIO) if device is removed.
+ *   - (-ENODEV)  if *port_id* invalid.
+ */
+__rte_experimental
+int rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Set Forward Error Correction(FEC) mode.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param fec_capa
+ *   A bitmask of allowed FEC modes. If AUTO bit is set, other
+ *   bits specify FEC modes which may be negotiated. If AUTO
+ *   bit is clear, specify FEC modes to be used (only one valid
+ *   mode per speed may be set).
+ * @return
+ *   - (0) if successful.
+ *   - (-EINVAL) if the FEC mode is not valid.
+ *   - (-ENOTSUP) if underlying hardware OR driver doesn't support.
+ *   - (-EIO) if device is removed.
+ *   - (-ENODEV)  if *port_id* invalid.
+ */
+__rte_experimental
+int rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa);
+
+/**
  * Get current status of the Ethernet link flow control for Ethernet device
  *
  * @param port_id
diff --git a/lib/librte_ethdev/rte_ethdev_driver.h b/lib/librte_ethdev/rte_ethdev_driver.h
index 23cc1e0..f147a67 100644
--- a/lib/librte_ethdev/rte_ethdev_driver.h
+++ b/lib/librte_ethdev/rte_ethdev_driver.h
@@ -575,6 +575,87 @@ typedef int (*eth_tx_hairpin_queue_setup_t)
 	 const struct rte_eth_hairpin_conf *hairpin_conf);
 
 /**
+ * @internal
+ * Get Forward Error Correction(FEC) capability.
+ *
+ * @param dev
+ *   ethdev handle of port.
+ * @param speed_fec_capa
+ *   speed_fec_capa is out only with per-speed capabilities.
+ * @param num
+ *   a number of elements in an speed_fec_capa array.
+ *
+ * @return
+ *   Negative errno value on error, positive value on success.
+ *
+ * @retval positive value
+ *   A non-negative value lower or equal to num: success. The return value
+ *   is the number of entries filled in the fec capa array.
+ *   A non-negative value higher than num: error, the given fec capa array
+ *   is too small. The return value corresponds to the num that should
+ *   be given to succeed. The entries in the fec capa array are not valid
+ *   and shall not be used by the caller.
+ * @retval -ENOTSUP
+ *   Operation is not supported.
+ * @retval -EIO
+ *   Device is removed.
+ * @retval -EINVAL
+ *   *num* or *speed_fec_capa* invalid.
+ */
+typedef int (*eth_fec_get_capability_t)(struct rte_eth_dev *dev,
+		struct rte_eth_fec_capa *speed_fec_capa, unsigned int num);
+
+/**
+ * @internal
+ * Get Forward Error Correction(FEC) mode.
+ *
+ * @param dev
+ *   ethdev handle of port.
+ * @param fec_capa
+ *   a bitmask of enabled FEC modes. If AUTO bit is set, other
+ *   bits specify FEC modes which may be negotiated. If AUTO
+ *   bit is clear, specify FEC modes to be used (only one valid
+ *   mode per speed may be set).
+ *
+ * @return
+ *   Negative errno value on error, 0 on success.
+ *
+ * @retval 0
+ *   Success, get FEC success.
+ * @retval -ENOTSUP
+ *   Operation is not supported.
+ * @retval -EIO
+ *   Device is removed.
+ */
+typedef int (*eth_fec_get_t)(struct rte_eth_dev *dev,
+			     uint32_t *fec_capa);
+
+/**
+ * @internal
+ * Set Forward Error Correction(FEC) mode.
+ *
+ * @param dev
+ *   ethdev handle of port.
+ * @param fec_capa
+ *   bitmask of allowed FEC modes. It must be only one
+ *   if AUTO is disabled. If AUTO is enabled, other
+ *   bits specify FEC modes which may be negotiated.
+ *
+ * @return
+ *   Negative errno value on error, 0 on success.
+ *
+ * @retval 0
+ *   Success, set FEC success.
+ * @retval -ENOTSUP
+ *   Operation is not supported.
+ * @retval -EINVAL
+ *   Unsupported FEC mode requested.
+ * @retval -EIO
+ *   Device is removed.
+ */
+typedef int (*eth_fec_set_t)(struct rte_eth_dev *dev, uint32_t fec_capa);
+
+/**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
 struct eth_dev_ops {
@@ -713,6 +794,13 @@ struct eth_dev_ops {
 	/**< Set up device RX hairpin queue. */
 	eth_tx_hairpin_queue_setup_t tx_hairpin_queue_setup;
 	/**< Set up device TX hairpin queue. */
+
+	eth_fec_get_capability_t fec_get_capability;
+	/**< Get Forward Error Correction(FEC) capability. */
+	eth_fec_get_t fec_get;
+	/**< Get Forward Error Correction(FEC) mode. */
+	eth_fec_set_t fec_set;
+	/**< Set Forward Error Correction(FEC) mode. */
 };
 
 /**
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index c95ef51..b9ace3a 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -229,6 +229,9 @@ EXPERIMENTAL {
 	# added in 20.11
 	rte_eth_link_speed_to_str;
 	rte_eth_link_to_str;
+	rte_eth_fec_get_capability;
+	rte_eth_fec_get;
+	rte_eth_fec_set;
 };
 
 INTERNAL {
-- 
2.7.4
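
For reference, a minimal caller-side sketch of the intended query/set flow,
following the header comments above (a first call with a NULL array returns
the number of entries required; fec_force_rs_sketch is a made-up name and
error handling is abbreviated):

#include <errno.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static int
fec_force_rs_sketch(uint16_t port_id)
{
	int num = rte_eth_fec_get_capability(port_id, NULL, 0);
	if (num <= 0)
		return num;

	struct rte_eth_fec_capa *capa = calloc(num, sizeof(*capa));
	if (capa == NULL)
		return -ENOMEM;

	int ret = rte_eth_fec_get_capability(port_id, capa, num);
	/* ... inspect capa[0..ret-1]: capa[i].speed and capa[i].capa, and
	 * make sure RS is actually supported before requesting it ... */

	if (ret > 0)
		ret = rte_eth_fec_set(port_id, RTE_ETH_FEC_MODE_CAPA_MASK(RS));

	free(capa);
	return ret < 0 ? ret : 0;
}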


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic
  @ 2020-10-08  9:51  3%   ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-08  9:51 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply for other operations which may be added
later to the driver.

Also add a suitable warning, using the deprecation attribute, for any code
still using the old function names.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 17 ++++++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fb..3db5f5d09 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 1e73c26d4..e7d038f31 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -121,6 +121,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -234,6 +239,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a..964160dff 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 77f96bba3..439b46c03 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -65,12 +65,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -119,12 +119,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352f..ae6393951 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -69,24 +69,26 @@ struct rte_ioat_rawdev_config {
  *   Number of operations enqueued, either 0 or 1
  */
 static inline int
+__rte_experimental
 rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		unsigned int length, uintptr_t src_hdl, uintptr_t dst_hdl,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+__rte_experimental
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +106,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +119,8 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+__rte_experimental
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e2..b155d79c4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b..67f75737b 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563..2920255fc 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 6/6] doc: update for two ports hairpin mode
  2020-10-08  8:51  5%   ` [dpdk-dev] [PATCH v2 6/6] doc: update for two ports hairpin mode Bing Zhao
@ 2020-10-08  9:47  0%     ` Ori Kam
  0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2020-10-08  9:47 UTC (permalink / raw)
  To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, arybchenko,
	mdr, nhorman, bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

Hi Bing,

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, October 8, 2020 11:52 AM
> Subject: [PATCH v2 6/6] doc: update for two ports hairpin mode
> 
> In the release notes, 2 ports hairpin mode feature is added.
> 
> In rte flow part, one suggestion is added to mention that metadata
> could be used to connect the hairpin RX and TX flows if the hairpin
> is working in explicit TX flow rule mode.
> 
> In the testpmd command line, the new parameter to set hairpin working
> mode is described.
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
>  doc/guides/prog_guide/rte_flow.rst     | 3 +++
>  doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
>  doc/guides/testpmd_app_ug/run_app.rst  | 8 ++++++++
>  3 files changed, 19 insertions(+)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> index 119b128..bb54d67 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on
> driver implementation. For
>  loopback/hairpin packet, metadata set on Rx/Tx may or may not be
> propagated to
>  the other path depending on HW capability.
> 
> +In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
> +used to connect the RX and TX flows if it can be propagated from RX to TX
> path.
> +
>  .. _table_rte_flow_action_set_meta:
> 
>  .. table:: SET_META
> diff --git a/doc/guides/rel_notes/release_20_11.rst
> b/doc/guides/rel_notes/release_20_11.rst
> index 0b2a370..05ceea0 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -109,6 +109,10 @@ New Features
>    * Extern objects and functions can be plugged into the pipeline.
>    * Transaction-oriented table updates.
> 
> +* **Updated the ethdev library to support hairpin between two ports.**
> +
> +  New APIs are introduced to support binding / unbinding 2 ports hairpin.
> +  Hairpin TX part flow rules can be inserted explicitly.
> 
>  Removed Items
>  -------------
> @@ -240,6 +244,10 @@ ABI Changes
> 
>    * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
> 
> +  * ``struct rte_eth_hairpin_conf`` has two new members:
> +
> +    * ``uint32_t tx_explicit:1;``
> +    * ``uint32_t manual_bind:1;``
> 
>  Known Issues
>  ------------
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> b/doc/guides/testpmd_app_ug/run_app.rst
> index e2539f6..4e627c4 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -497,3 +497,11 @@ The command line options are:
>  *   ``--record-burst-stats``
> 
>      Enable display of RX and TX burst stats.
> +
> +*   ``--hairpin-mode=0xXX``
> +
> +    Set the hairpin port mode with bitmask, only valid when hairpin queues
> number is set.
> +    bit 4 - explicit TX flow rule
> +    bit 1 - two hairpin ports paired
> +    bit 0 - two hairpin ports loop
> +    The default value is 0. Hairpin will use single port mode and implicit TX flow
> mode.
> --
> 1.8.3.1

Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  2020-10-08  8:53  0%               ` Nipun Gupta
@ 2020-10-08  8:55  0%                 ` Jerin Jacob
  2020-10-08 15:13  0%                   ` Asaf Penso
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-08  8:55 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Stephen Hemminger, dpdk-dev, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko, Hemant Agrawal, Sachin Saxena, Rohit Raj

On Thu, Oct 8, 2020 at 2:23 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Tuesday, October 6, 2020 6:44 PM
> > To: Nipun Gupta <nipun.gupta@nxp.com>
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> > <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> > <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > packets
> >
> > On Tue, Oct 6, 2020 at 6:40 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Tuesday, October 6, 2020 5:31 PM
> > > > To: Nipun Gupta <nipun.gupta@nxp.com>
> > > > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> > > > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> > > > <ferruh.yigit@intel.com>; Andrew Rybchenko
> > <arybchenko@solarflare.com>;
> > > > Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> > > > <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > > > packets
> > > >
> > > > On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Monday, October 5, 2020 9:40 PM
> > > > > > To: Stephen Hemminger <stephen@networkplumber.org>
> > > > > > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev <dev@dpdk.org>;
> > > > Thomas
> > > > > > Monjalon <thomas@monjalon.net>; Ferruh Yigit
> > <ferruh.yigit@intel.com>;
> > > > > > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant Agrawal
> > > > > > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> > > > Rohit
> > > > > > Raj <rohit.raj@nxp.com>
> > > > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop
> > error
> > > > > > packets
> > > > > >
> > > > > > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
> > > > > > <stephen@networkplumber.org> wrote:
> > > > > > >
> > > > > > > On Mon,  5 Oct 2020 12:45:04 +0530
> > > > > > > nipun.gupta@nxp.com wrote:
> > > > > > >
> > > > > > > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > > > >
> > > > > > > > This change adds a RX offload capability, which once enabled,
> > > > > > > > hardware will drop the packets in case there of any error in
> > > > > > > > the packet such as L3 checksum error or L4 checksum.
> > > > > >
> > > > > > IMO, Providing additional support up to the level to choose the errors
> > > > > > to drops give more control to the application. Meaning,
> > > > > > L1 errors such as FCS error
> > > > > > L2 errors ..
> > > > > > L3 errors such checksum
> > > > > > i.e ethdev spec need to have  error level supported by PMD and the
> > > > > > application can set the layers interested to drop.
> > > > >
> > > > > Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there to drop
> > all
> > > > the
> > > > > error packets? Maybe we can rename it to
> > > > DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.
> > > >
> > > > IMHO, if we introduce such a shortcut, a single flag for all error drops,
> > > > then we cannot change the scheme later without an API/ABI break.
> > >
> > > Are the following offloads fine:
> > >         DEV_RX_OFFLOAD_L1_FCS_ERR_PKT_DROP
> > >         DEV_RX_OFFLOAD_L3_CSUM_ERR_PKT_DROP
> > >         DEV_RX_OFFLOAD_L4_CSUM_ERR_PKT_DROP
> > >         DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP
> > >
> > > Please let me know in case I need to add any others too.
> >
> > I think a single offload flag plus a config/capability structure to define
> > the additional layer selection would be good, instead of adding a lot of new
> > offload flags.
>
>
> +/**
> + * A structure used to enable/disable error packet drop on Rx.
> + */
> +struct rte_rx_err_pkt_drop_conf {
> +       /** enable/disable all RX error packet drop.
> +        * 0 (default) - disable, 1 enable
> +        */
> +       uint32_t all:1;
> +};
> +
>  /**
>   * A structure used to configure an Ethernet port.
>   * Depending upon the RX multi-queue mode, extra advanced
> @@ -1236,6 +1246,8 @@ struct rte_eth_conf {
>         uint32_t dcb_capability_en;
>         struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
>         struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
> +       struct rte_rx_err_pkt_drop_conf err_pkt_drop_conf;
> +       /**< RX error packet drop configuration. */
>
> Is this the kind of change you are talking about?


Yes.

>
> Also, more changes will be needed in the 'struct rte_eth_dev_info' structure,
> defining an additional separate capability, something like
> 'uint64_t rx_err_drop_offload_capa'.
>
> Regards,
> Nipun
>
> >
> >
> > > I'll send a v3.
> > >
> > > Thanks,
> > > Nipun
> > >
> > > >
> > > > >
> > > > > Currently we have not planned to add separate knobs for each error type in
> > > > > the driver; maybe we can define them separately, or do we need to have them
> > > > > in this series itself?
> > > >
> > > > I think the ethdev API can expose a capability for the error levels it
> > > > supports; in your driver's case, you can express the same.
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > > >
> > > > > > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> > > > > > > > ---
> > > > > > > > These patches are based over series:
> > > > > > > >
> > > > > > > > https://patchwork.dpdk.org/patch/78630/
> > > > > > > >
> > > > > > > > Changes in v2:
> > > > > > > >  - Add support in DPAA1 driver (patch 2/3)
> > > > > > > >  - Add support and config parameter in testpmd (patch 3/3)
> > > > > > > >
> > > > > > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
> > > > > > > >  1 file changed, 1 insertion(+)
> > > > > > >
> > > > > > > Maybe this should be an rte_flow match/action which would then make it
> > > > > > > more flexible?
> > > > > >
> > > > > > I think it is not based on any pattern matching. So IMO, it would be best
> > > > > > if it is part of the RX offload.
> > > > > >
> > > > > > >
> > > > > > > There is not much of a performance gain for this in real life and
> > > > > > > if only one driver supports it then I am not convinced this is needed.
> > > > > >
> > > > > > Marvell HW has this feature.
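
For reference, a minimal sketch of how an application might use the
configuration structure proposed above. The 'err_pkt_drop_conf' member, its
'all' bit and the 'rx_err_drop_offload_capa' capability are only the names
from this proposal, not part of any released ethdev API.

#include <string.h>
#include <rte_ethdev.h>

/* Sketch only: the err_pkt_drop_conf member and its 'all' bit follow the
 * proposal above and do not exist in the released ethdev API. */
static int
enable_drop_all_error_packets(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	/* A real application would first check the (equally hypothetical)
	 * dev_info.rx_err_drop_offload_capa bits reported by the PMD. */
	port_conf.err_pkt_drop_conf.all = 1;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}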

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  2020-10-06 13:13  0%             ` Jerin Jacob
@ 2020-10-08  8:53  0%               ` Nipun Gupta
  2020-10-08  8:55  0%                 ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Nipun Gupta @ 2020-10-08  8:53 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Stephen Hemminger, dpdk-dev, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko, Hemant Agrawal, Sachin Saxena, Rohit Raj



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, October 6, 2020 6:44 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> packets
> 
> On Tue, Oct 6, 2020 at 6:40 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Tuesday, October 6, 2020 5:31 PM
> > > To: Nipun Gupta <nipun.gupta@nxp.com>
> > > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> > > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> > > <ferruh.yigit@intel.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>;
> > > Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> > > <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > > packets
> > >
> > > On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Monday, October 5, 2020 9:40 PM
> > > > > To: Stephen Hemminger <stephen@networkplumber.org>
> > > > > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev <dev@dpdk.org>;
> > > Thomas
> > > > > Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>;
> > > > > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant Agrawal
> > > > > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> > > Rohit
> > > > > Raj <rohit.raj@nxp.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop
> error
> > > > > packets
> > > > >
> > > > > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
> > > > > <stephen@networkplumber.org> wrote:
> > > > > >
> > > > > > On Mon,  5 Oct 2020 12:45:04 +0530
> > > > > > nipun.gupta@nxp.com wrote:
> > > > > >
> > > > > > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > > >
> > > > > > > This change adds an RX offload capability which, once enabled,
> > > > > > > lets the hardware drop packets that contain any error, such as
> > > > > > > an L3 or L4 checksum error.
> > > > >
> > > > > IMO, providing additional support down to the level of choosing which
> > > > > errors to drop gives more control to the application. Meaning,
> > > > > L1 errors such as FCS errors
> > > > > L2 errors ..
> > > > > L3 errors such as checksum errors
> > > > > i.e. the ethdev spec needs to express the error levels supported by the
> > > > > PMD, and the application can select the layers it wants dropped.
> > > >
> > > > Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there to drop all
> > > > the error packets? Maybe we can rename it to
> > > > DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.
> > >
> > > IMHO, if we introduce such a shortcut, a single flag for all error drops,
> > > then we cannot change the scheme later without an API/ABI break.
> >
> > Are the following offloads fine:
> >         DEV_RX_OFFLOAD_L1_FCS_ERR_PKT_DROP
> >         DEV_RX_OFFLOAD_L3_CSUM_ERR_PKT_DROP
> >         DEV_RX_OFFLOAD_L4_CSUM_ERR_PKT_DROP
> >         DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP
> >
> > Please let me know in case I need to add any others too.
> 
> I think a single offload flag plus a config/capability structure to define
> the additional layer selection would be good, instead of adding a lot of new
> offload flags.


+/**
+ * A structure used to enable/disable error packet drop on Rx.
+ */
+struct rte_rx_err_pkt_drop_conf {
+       /** enable/disable all RX error packet drop.
+        * 0 (default) - disable, 1 enable
+        */
+       uint32_t all:1;
+};
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the RX multi-queue mode, extra advanced
@@ -1236,6 +1246,8 @@ struct rte_eth_conf {
        uint32_t dcb_capability_en;
        struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
        struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+       struct rte_rx_err_pkt_drop_conf err_pkt_drop_conf;
+       /**< RX error packet drop configuration. */

Is this the kind of change you are talking about?

Also, more changes will be needed in the 'struct rte_eth_dev_info' structure,
defining an additional separate capability, something like
'uint64_t rx_err_drop_offload_capa'.

Regards,
Nipun

> 
> 
> > I'll send a v3.
> >
> > Thanks,
> > Nipun
> >
> > >
> > > >
> > > > Currently we have not planned to add separate knobs for each error type in
> > > > the driver; maybe we can define them separately, or do we need to have them
> > > > in this series itself?
> > >
> > > I think the ethdev API can expose a capability for the error levels it
> > > supports; in your driver's case, you can express the same.
> > >
> > >
> > > >
> > > > >
> > > > > > >
> > > > > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> > > > > > > ---
> > > > > > > These patches are based over series:
> > > > > > >
> > > > > > > https://patchwork.dpdk.org/patch/78630/
> > > > > > >
> > > > > > > Changes in v2:
> > > > > > >  - Add support in DPAA1 driver (patch 2/3)
> > > > > > >  - Add support and config parameter in testpmd (patch 3/3)
> > > > > > >
> > > > > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
> > > > > > >  1 file changed, 1 insertion(+)
> > > > > >
> > > > > > Maybe this should be an rte_flow match/action which would then make it
> > > > > > more flexible?
> > > > >
> > > > > I think it is not based on any pattern matching. So IMO, it would be best
> > > > > if it is part of the RX offload.
> > > > >
> > > > > >
> > > > > > There is not much of a performance gain for this in real life and
> > > > > > if only one driver supports it then I am not convinced this is needed.
> > > > >
> > > > > Marvell HW has this feature.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 6/6] doc: update for two ports hairpin mode
  @ 2020-10-08  8:51  5%   ` Bing Zhao
  2020-10-08  9:47  0%     ` Ori Kam
    1 sibling, 1 reply; 200+ results
From: Bing Zhao @ 2020-10-08  8:51 UTC (permalink / raw)
  To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
	bernard.iremonger, beilei.xing, wenzhuo.lu
  Cc: dev

In the release notes, the two-port hairpin mode feature is added.

In the rte_flow part, a note is added to mention that metadata could be used
to connect the hairpin RX and TX flows when the hairpin is working in
explicit TX flow rule mode.

In the testpmd command line, the new parameter to set hairpin working
mode is described.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 3 +++
 doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
 doc/guides/testpmd_app_ug/run_app.rst  | 8 ++++++++
 3 files changed, 19 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..bb54d67 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
 loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
 the other path depending on HW capability.
 
+In the hairpin case with TX explicit flow mode, metadata may optionally be
+used to connect the RX and TX flows, if it can be propagated from RX to TX path.
+
 .. _table_rte_flow_action_set_meta:
 
 .. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0b2a370..05ceea0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -109,6 +109,10 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated the ethdev library to support hairpin between two ports.**
+
+  New APIs are introduced to support binding/unbinding hairpin between two
+  ports. Hairpin TX part flow rules can be inserted explicitly.
 
 Removed Items
 -------------
@@ -240,6 +244,10 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * ``struct rte_eth_hairpin_conf`` has two new members:
+
+    * ``uint32_t tx_explicit:1;``
+    * ``uint32_t manual_bind:1;``
 
 Known Issues
 ------------
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index e2539f6..4e627c4 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -497,3 +497,11 @@ The command line options are:
 *   ``--record-burst-stats``
 
     Enable display of RX and TX burst stats.
+
+*   ``--hairpin-mode=0xXX``
+
+    Set the hairpin port mode with a bitmask; only valid when the number of hairpin queues is set.
+    bit 4 - explicit TX flow rule
+    bit 1 - two hairpin ports paired
+    bit 0 - two hairpin ports loop
+    The default value is 0. Hairpin will use single port mode and implicit TX flow mode.
-- 
1.8.3.1
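
For reference, a minimal sketch of how an application might set the two new
rte_eth_hairpin_conf bits from this series when setting up a hairpin TX queue.
The rte_eth_tx_hairpin_queue_setup() call and the peer port/queue fields come
from the existing hairpin API; the peer values and ring size are illustrative.

#include <rte_ethdev.h>

/* Sketch only: two-port hairpin TX queue using the new tx_explicit and
 * manual_bind bits added by this series. */
static int
setup_two_port_hairpin_txq(uint16_t tx_port, uint16_t queue_id,
		uint16_t peer_rx_port, uint16_t peer_rx_queue)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.tx_explicit = 1,	/* TX flow rules inserted by the application */
		.manual_bind = 1,	/* ports bound/unbound by the application */
	};

	conf.peers[0].port = peer_rx_port;
	conf.peers[0].queue = peer_rx_queue;

	return rte_eth_tx_hairpin_queue_setup(tx_port, queue_id,
			0 /* default ring size */, &conf);
}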


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v2 1/1] cryptodev: remove v20 ABI compatibility
  2020-10-08  8:32  9% ` [dpdk-dev] [PATCH v2 0/1] cryptodev: remove v20 ABI compatibility Adam Dybkowski
@ 2020-10-08  8:32 14%   ` Adam Dybkowski
  2020-10-09 17:41  4%     ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Adam Dybkowski @ 2020-10-08  8:32 UTC (permalink / raw)
  To: dev, akhil.goyal
  Cc: fiona.trahe, david.marchand, declan.doherty, Adam Dybkowski,
	Arek Kusztal

This reverts commit a0f0de06d457753c94688d551a6e8659b4d4e041 as the
rte_cryptodev_info_get function versioning was a temporary solution
to maintain ABI compatibility for ChaCha20-Poly1305 and is not
needed in 20.11.

Fixes: a0f0de06d457 ("cryptodev: fix ABI compatibility for ChaCha20-Poly1305")

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Reviewed-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 lib/librte_cryptodev/meson.build              |   1 -
 lib/librte_cryptodev/rte_cryptodev.c          | 150 +-----------------
 lib/librte_cryptodev/rte_cryptodev.h          |  34 +---
 .../rte_cryptodev_version.map                 |   6 -
 4 files changed, 5 insertions(+), 186 deletions(-)

diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
index df1144058..c4c6b3b6a 100644
--- a/lib/librte_cryptodev/meson.build
+++ b/lib/librte_cryptodev/meson.build
@@ -1,7 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017-2019 Intel Corporation
 
-use_function_versioning = true
 sources = files('rte_cryptodev.c', 'rte_cryptodev_pmd.c', 'cryptodev_trace_points.c')
 headers = files('rte_cryptodev.h',
 	'rte_cryptodev_pmd.h',
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..cda160f61 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -36,8 +36,6 @@
 #include <rte_errno.h>
 #include <rte_spinlock.h>
 #include <rte_string_fns.h>
-#include <rte_compat.h>
-#include <rte_function_versioning.h>
 
 #include "rte_crypto.h"
 #include "rte_cryptodev.h"
@@ -59,15 +57,6 @@ static struct rte_cryptodev_global cryptodev_globals = {
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
-static const struct rte_cryptodev_capabilities
-		cryptodev_undefined_capabilities[] = {
-		RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static struct rte_cryptodev_capabilities
-		*capability_copy[RTE_CRYPTO_MAX_DEVS];
-static uint8_t is_capability_checked[RTE_CRYPTO_MAX_DEVS];
-
 /**
  * The user application callback description.
  *
@@ -291,43 +280,8 @@ rte_crypto_auth_operation_strings[] = {
 		[RTE_CRYPTO_AUTH_OP_GENERATE]	= "generate"
 };
 
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx)
-{
-	const struct rte_cryptodev_capabilities *capability;
-	struct rte_cryptodev_info dev_info;
-	int i = 0;
-
-	rte_cryptodev_info_get_v20(dev_id, &dev_info);
-
-	while ((capability = &dev_info.capabilities[i++])->op !=
-			RTE_CRYPTO_OP_TYPE_UNDEFINED) {
-		if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
-			continue;
-
-		if (capability->sym.xform_type != idx->type)
-			continue;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
-			capability->sym.auth.algo == idx->algo.auth)
-			return &capability->sym;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
-			capability->sym.cipher.algo == idx->algo.cipher)
-			return &capability->sym;
-
-		if (idx->type == RTE_CRYPTO_SYM_XFORM_AEAD &&
-				capability->sym.aead.algo == idx->algo.aead)
-			return &capability->sym;
-	}
-
-	return NULL;
-}
-VERSION_SYMBOL(rte_cryptodev_sym_capability_get, _v20, 20.0);
-
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
+const struct rte_cryptodev_symmetric_capability *
+rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx)
 {
 	const struct rte_cryptodev_capabilities *capability;
@@ -359,11 +313,6 @@ rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
 
 	return NULL;
 }
-MAP_STATIC_SYMBOL(const struct rte_cryptodev_symmetric_capability *
-		rte_cryptodev_sym_capability_get(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx),
-		rte_cryptodev_sym_capability_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_sym_capability_get, _v21, 21);
 
 static int
 param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
@@ -1085,12 +1034,6 @@ rte_cryptodev_close(uint8_t dev_id)
 	retval = (*dev->dev_ops->dev_close)(dev);
 	rte_cryptodev_trace_close(dev_id, retval);
 
-	if (capability_copy[dev_id]) {
-		free(capability_copy[dev_id]);
-		capability_copy[dev_id] = NULL;
-	}
-	is_capability_checked[dev_id] = 0;
-
 	if (retval < 0)
 		return retval;
 
@@ -1233,89 +1176,8 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
-static void
-get_v20_capabilities(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
-	const struct rte_cryptodev_capabilities *capability;
-	uint8_t found_invalid_capa = 0;
-	uint8_t counter = 0;
-
-	for (capability = dev_info->capabilities;
-			capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED;
-			++capability, ++counter) {
-		if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
-				capability->sym.xform_type ==
-					RTE_CRYPTO_SYM_XFORM_AEAD
-				&& capability->sym.aead.algo >=
-				RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
-			found_invalid_capa = 1;
-			counter--;
-		}
-	}
-	is_capability_checked[dev_id] = 1;
-	if (!found_invalid_capa)
-		return;
-	capability_copy[dev_id] = malloc(counter *
-		sizeof(struct rte_cryptodev_capabilities));
-	if (capability_copy[dev_id] == NULL) {
-		 /*
-		  * error case - no memory to store the trimmed
-		  * list, so have to return an empty list
-		  */
-		dev_info->capabilities =
-			cryptodev_undefined_capabilities;
-		is_capability_checked[dev_id] = 0;
-	} else {
-		counter = 0;
-		for (capability = dev_info->capabilities;
-				capability->op !=
-				RTE_CRYPTO_OP_TYPE_UNDEFINED;
-				capability++) {
-			if (!(capability->op ==
-				RTE_CRYPTO_OP_TYPE_SYMMETRIC
-				&& capability->sym.xform_type ==
-				RTE_CRYPTO_SYM_XFORM_AEAD
-				&& capability->sym.aead.algo >=
-				RTE_CRYPTO_AEAD_CHACHA20_POLY1305)) {
-				capability_copy[dev_id][counter++] =
-						*capability;
-			}
-		}
-		dev_info->capabilities =
-				capability_copy[dev_id];
-	}
-}
-
-void __vsym
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
-	struct rte_cryptodev *dev;
-
-	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
-		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
-		return;
-	}
-
-	dev = &rte_crypto_devices[dev_id];
-
-	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
-
-	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
-	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
-
-	if (capability_copy[dev_id] == NULL) {
-		if (!is_capability_checked[dev_id])
-			get_v20_capabilities(dev_id, dev_info);
-	} else
-		dev_info->capabilities = capability_copy[dev_id];
-
-	dev_info->driver_name = dev->device->driver->name;
-	dev_info->device = dev->device;
-}
-VERSION_SYMBOL(rte_cryptodev_info_get, _v20, 20.0);
-
-void __vsym
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 {
 	struct rte_cryptodev *dev;
 
@@ -1334,9 +1196,6 @@ rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 	dev_info->driver_name = dev->device->driver->name;
 	dev_info->device = dev->device;
 }
-MAP_STATIC_SYMBOL(void rte_cryptodev_info_get(uint8_t dev_id,
-	struct rte_cryptodev_info *dev_info), rte_cryptodev_info_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_info_get, _v21, 21);
 
 int
 rte_cryptodev_callback_register(uint8_t dev_id,
@@ -1449,7 +1308,6 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
 }
 
-
 int
 rte_cryptodev_sym_session_init(uint8_t dev_id,
 		struct rte_cryptodev_sym_session *sess,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..26abd0c52 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -219,14 +219,6 @@ struct rte_cryptodev_asym_capability_idx {
  *   - Return NULL if the capability not exist.
  */
 const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
-		const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
 rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx);
 
@@ -789,33 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
  * the last valid element has it's op field set to
  * RTE_CRYPTO_OP_TYPE_UNDEFINED.
  */
-
-void
+extern void
 rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
 
-/* An extra element RTE_CRYPTO_AEAD_CHACHA20_POLY1305 is added
- * to enum rte_crypto_aead_algorithm, also changing the value of
- *  RTE_CRYPTO_AEAD_LIST_END. To maintain ABI compatibility with applications
- * which linked against earlier versions, preventing them, for example, from
- * picking up the new value and using it to index into an array sized too small
- * for it, it is necessary to have two versions of rte_cryptodev_info_get()
- * The latest version just returns directly the capabilities retrieved from
- * the device. The compatible version inspects the capabilities retrieved
- * from the device, but only returns them directly if the new value
- * is not included. If the new value is included, it allocates space
- * for a copy of the device capabilities, trims the new value from this
- * and returns this copy. It only needs to do this once per device.
- * For the corner case of a corner case when the alloc may fail,
- * an empty capability list is returned, as there is no mechanism to return
- * an error and adding such a mechanism would itself be an ABI breakage.
- * The compatible version can be removed after the next major ABI release.
- */
-
-void
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-
-void
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
 
 /**
  * Register a callback function for specific device id.
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..7727286ac 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -58,12 +58,6 @@ DPDK_21 {
 	local: *;
 };
 
-DPDK_20.0 {
-	global:
-	rte_cryptodev_info_get;
-	rte_cryptodev_sym_capability_get;
-};
-
 EXPERIMENTAL {
 	global:
 
-- 
2.25.1
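
With the versioned v20 symbols removed, callers simply use
rte_cryptodev_info_get() and walk the full capability list, ChaCha20-Poly1305
included. A minimal sketch, using only names visible in this patch:

#include <rte_cryptodev.h>

/* Sketch only: returns 1 if the device reports the ChaCha20-Poly1305 AEAD
 * capability, 0 otherwise. */
static int
device_supports_chacha20_poly1305(uint8_t dev_id)
{
	struct rte_cryptodev_info info;
	const struct rte_cryptodev_capabilities *cap;
	int i = 0;

	rte_cryptodev_info_get(dev_id, &info);

	while ((cap = &info.capabilities[i++])->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
		if (cap->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
				cap->sym.xform_type == RTE_CRYPTO_SYM_XFORM_AEAD &&
				cap->sym.aead.algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305)
			return 1;
	}
	return 0;
}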


^ permalink raw reply	[relevance 14%]

* [dpdk-dev] [PATCH v2 0/1] cryptodev: remove v20 ABI compatibility
    2020-10-06 12:32  9% ` David Marchand
@ 2020-10-08  8:32  9% ` Adam Dybkowski
  2020-10-08  8:32 14%   ` [dpdk-dev] [PATCH v2 1/1] " Adam Dybkowski
  1 sibling, 1 reply; 200+ results
From: Adam Dybkowski @ 2020-10-08  8:32 UTC (permalink / raw)
  To: dev, akhil.goyal
  Cc: fiona.trahe, david.marchand, declan.doherty, Adam Dybkowski

This reverts commit "cryptodev: fix ABI compatibility for ChaCha20-Poly1305",
as the rte_cryptodev_info_get function versioning was a temporary solution
to maintain ABI compatibility for ChaCha20-Poly1305 and is not
needed in 20.11.

Adam Dybkowski (1):
  cryptodev: remove v20 ABI compatibility

--
v2:
* minor styling issues corrected (removed empty lines)

--
 lib/librte_cryptodev/meson.build              |   1 -
 lib/librte_cryptodev/rte_cryptodev.c          | 150 +-----------------
 lib/librte_cryptodev/rte_cryptodev.h          |  34 +---
 .../rte_cryptodev_version.map                 |   6 -
 4 files changed, 5 insertions(+), 186 deletions(-)

-- 
2.25.1


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v2] eal: simplify exit functions
  @ 2020-10-08  7:51  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-08  7:51 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, techboard, Bruce Richardson, Ray Kinsella, Neil Horman

On Mon, Sep 28, 2020 at 2:01 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> The option RTE_EAL_ALWAYS_PANIC_ON_ERROR was off by default,
> and not customizable with meson. It is completely removed.
>
> The function rte_dump_registers is a leftover from the bare metal support
> era and was never implemented in userland. It is completely removed.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> The deprecation notice for this removal has been missed.
> I assume it would not hurt anybody to remove this useless function
> from DPDK 20.11. Asking the Technical Board for confirmation.
> ---
>  app/test/test_debug.c                    |  3 ---
>  doc/guides/howto/debug_troubleshoot.rst  |  2 +-
>  doc/guides/rel_notes/release_20_11.rst   |  2 ++
>  lib/librte_eal/common/eal_common_debug.c | 17 +----------------
>  lib/librte_eal/include/rte_debug.h       |  7 -------
>  lib/librte_eal/rte_eal_version.map       |  1 -
>  6 files changed, 4 insertions(+), 28 deletions(-)
>
> diff --git a/app/test/test_debug.c b/app/test/test_debug.c
> index 25eab97e2a..834a7386f5 100644
> --- a/app/test/test_debug.c
> +++ b/app/test/test_debug.c
> @@ -66,13 +66,11 @@ test_exit_val(int exit_val)
>         }
>         wait(&status);
>         printf("Child process status: %d\n", status);
> -#ifndef RTE_EAL_ALWAYS_PANIC_ON_ERROR
>         if(!WIFEXITED(status) || WEXITSTATUS(status) != (uint8_t)exit_val){
>                 printf("Child process terminated with incorrect status (expected = %d)!\n",
>                                 exit_val);
>                 return -1;
>         }
> -#endif
>         return 0;
>  }
>
> @@ -113,7 +111,6 @@ static int
>  test_debug(void)
>  {
>         rte_dump_stack();
> -       rte_dump_registers();
>         if (test_panic() < 0)
>                 return -1;
>         if (test_exit() < 0)
> diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
> index 5a46f5fba3..50bd32a8ef 100644
> --- a/doc/guides/howto/debug_troubleshoot.rst
> +++ b/doc/guides/howto/debug_troubleshoot.rst
> @@ -314,7 +314,7 @@ Custom worker function :numref:`dtg_distributor_worker`.
>     * For high-performance execution logic ensure running it on correct NUMA
>       and non-master core.
>
> -   * Analyze run logic with ``rte_dump_stack``, ``rte_dump_registers`` and
> +   * Analyze run logic with ``rte_dump_stack`` and
>       ``rte_memdump`` for more insights.
>
>     * Make use of objdump to ensure opcode is matching to the desired state.
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f377ab8e87..c0b83e9554 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -184,6 +184,8 @@ ABI Changes
>     Also, make sure to start the actual text at the margin.
>     =======================================================
>
> +* eal: Removed the not implemented function ``rte_dump_registers()``.
> +
>  * ``ethdev`` changes
>
>    * Following device operation function pointers moved
> diff --git a/lib/librte_eal/common/eal_common_debug.c b/lib/librte_eal/common/eal_common_debug.c
> index 722468754d..15418e957f 100644
> --- a/lib/librte_eal/common/eal_common_debug.c
> +++ b/lib/librte_eal/common/eal_common_debug.c
> @@ -7,14 +7,6 @@
>  #include <rte_log.h>
>  #include <rte_debug.h>
>
> -/* not implemented */
> -void
> -rte_dump_registers(void)
> -{
> -       return;
> -}
> -
> -/* call abort(), it will generate a coredump if enabled */
>  void
>  __rte_panic(const char *funcname, const char *format, ...)
>  {
> @@ -25,8 +17,7 @@ __rte_panic(const char *funcname, const char *format, ...)
>         rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap);
>         va_end(ap);
>         rte_dump_stack();
> -       rte_dump_registers();
> -       abort();
> +       abort(); /* generate a coredump if enabled */
>  }
>
>  /*
> @@ -46,14 +37,8 @@ rte_exit(int exit_code, const char *format, ...)
>         rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap);
>         va_end(ap);
>
> -#ifndef RTE_EAL_ALWAYS_PANIC_ON_ERROR
>         if (rte_eal_cleanup() != 0)
>                 RTE_LOG(CRIT, EAL,
>                         "EAL could not release all resources\n");
>         exit(exit_code);
> -#else
> -       rte_dump_stack();
> -       rte_dump_registers();
> -       abort();
> -#endif
>  }
> diff --git a/lib/librte_eal/include/rte_debug.h b/lib/librte_eal/include/rte_debug.h
> index 50052c5a90..c4bc71ce28 100644
> --- a/lib/librte_eal/include/rte_debug.h
> +++ b/lib/librte_eal/include/rte_debug.h
> @@ -26,13 +26,6 @@ extern "C" {
>   */
>  void rte_dump_stack(void);
>
> -/**
> - * Dump the registers of the calling core to the console.
> - *
> - * Note: Not implemented in a userapp environment; use gdb instead.
> - */
> -void rte_dump_registers(void);
> -
>  /**
>   * Provide notification of a critical non-recoverable error and terminate
>   * execution abnormally.
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index c32461c663..cd1a90b95f 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -38,7 +38,6 @@ DPDK_21 {
>         rte_devargs_remove;
>         rte_devargs_type_count;
>         rte_dump_physmem_layout;
> -       rte_dump_registers;
>         rte_dump_stack;
>         rte_dump_tailq;
>         rte_eal_alarm_cancel;
> --
> 2.28.0
>

Acked-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand
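
For reference, a minimal sketch of the resulting behaviour from an
application's point of view: rte_exit() now always attempts rte_eal_cleanup()
before exiting, and rte_panic() remains the call that aborts with a stack dump.

#include <stdlib.h>
#include <rte_eal.h>

/* Sketch only: typical error handling after this change. */
int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "Cannot init EAL\n");	/* cleans up, then exit() */

	/* ... application work ... */

	rte_eal_cleanup();
	return 0;
}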


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
  2020-10-07 16:45  2%   ` [dpdk-dev] [PATCH v4 " Vikas Gupta
@ 2020-10-07 17:18  2%     ` Vikas Gupta
    2020-10-09 15:00  0%       ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
  0 siblings, 2 replies; 200+ results
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta

Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs, which have a FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.

The patchset progressively adds major modules as below.
a) Detection of platform-device based on the known registered platforms and attaching with VFIO.
b) Creation of Cryptodevice.
c) Addition of session handling.
d) Add Cryptodevice into test Cryptodev framework. 

The patchset has been tested on the above mentioned SoCs.

Regards,
Vikas

Changes from v0->v1: 
      Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map

Changes from v1->v2:
	- Fix compilation errors and coding style warnings.
	- Use global test crypto suite suggested by Adam Dybkowski

Changes from v2->v3:
	- Release notes updated.
	- bcmfs.rst updated with missing information about installation.
	- Review comments from patch1 from v2 addressed.
	- Updated description about dependency of PMD driver on VFIO_PRESENT.
	- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
	- Comments on patch6 from v2 addressed and capability list is fixed.
		Removed redundant enums and macros from the file
		bcmfs_sym_defs.h and updated other impacted APIs accordingly.
		patch7 too is updated due to removal of redundancy.
	  Thanks to Akhil for pointing out the redundancy.
	- Fix minor code style issues in a few files as part of review.

Changes from v3->v4:
	- Code style issues fixed.
	- Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c

Changes from v4->v5:
	- Change of barrier API in bcmfs4_rm.c. Missed one in v4


Vikas Gupta (8):
  crypto/bcmfs: add BCMFS driver
  crypto/bcmfs: add vfio support
  crypto/bcmfs: add queue pair management API
  crypto/bcmfs: add HW queue pair operations
  crypto/bcmfs: create a symmetric cryptodev
  crypto/bcmfs: add session handling and capabilities
  crypto/bcmfs: add crypto HW module
  crypto/bcmfs: add crypto pmd into cryptodev test

 MAINTAINERS                                   |    7 +
 app/test/test_cryptodev.c                     |   17 +
 app/test/test_cryptodev.h                     |    1 +
 doc/guides/cryptodevs/bcmfs.rst               |  109 ++
 doc/guides/cryptodevs/features/bcmfs.ini      |   56 +
 doc/guides/cryptodevs/index.rst               |    1 +
 doc/guides/rel_notes/release_20_11.rst        |    5 +
 drivers/crypto/bcmfs/bcmfs_dev_msg.h          |   29 +
 drivers/crypto/bcmfs/bcmfs_device.c           |  332 +++++
 drivers/crypto/bcmfs/bcmfs_device.h           |   76 ++
 drivers/crypto/bcmfs/bcmfs_hw_defs.h          |   32 +
 drivers/crypto/bcmfs/bcmfs_logs.c             |   38 +
 drivers/crypto/bcmfs/bcmfs_logs.h             |   34 +
 drivers/crypto/bcmfs/bcmfs_qp.c               |  383 ++++++
 drivers/crypto/bcmfs/bcmfs_qp.h               |  142 ++
 drivers/crypto/bcmfs/bcmfs_sym.c              |  289 +++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c |  764 +++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h |   16 +
 drivers/crypto/bcmfs/bcmfs_sym_defs.h         |   34 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c       | 1155 +++++++++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_engine.h       |  115 ++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c          |  426 ++++++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.h          |   38 +
 drivers/crypto/bcmfs/bcmfs_sym_req.h          |   62 +
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |  282 ++++
 drivers/crypto/bcmfs/bcmfs_sym_session.h      |  109 ++
 drivers/crypto/bcmfs/bcmfs_vfio.c             |  107 ++
 drivers/crypto/bcmfs/bcmfs_vfio.h             |   17 +
 drivers/crypto/bcmfs/hw/bcmfs4_rm.c           |  743 +++++++++++
 drivers/crypto/bcmfs/hw/bcmfs5_rm.c           |  677 ++++++++++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c     |   82 ++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h     |   51 +
 drivers/crypto/bcmfs/meson.build              |   20 +
 .../crypto/bcmfs/rte_pmd_bcmfs_version.map    |    3 +
 drivers/crypto/meson.build                    |    1 +
 35 files changed, 6253 insertions(+)
 create mode 100644 doc/guides/cryptodevs/bcmfs.rst
 create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
 create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
 create mode 100644 drivers/crypto/bcmfs/meson.build
 create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map

-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic
  @ 2020-10-07 16:30  3%   ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-07 16:30 UTC (permalink / raw)
  To: dev; +Cc: patrick.fu, thomas, Bruce Richardson, Kevin Laatz, Radu Nicolau

Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply for other operations which may be added
later to the driver.

Also add a suitable warning using the deprecation attribute for any code
using the old function names.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 doc/guides/rawdevs/ioat.rst            | 16 ++++++++--------
 doc/guides/rel_notes/release_20_11.rst |  9 +++++++++
 doc/guides/sample_app_ug/ioat.rst      |  8 ++++----
 drivers/raw/ioat/ioat_rawdev_test.c    | 12 ++++++------
 drivers/raw/ioat/rte_ioat_rawdev.h     | 14 +++++++-------
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 20 ++++++++++++++++----
 examples/ioat/ioatfwd.c                |  4 ++--
 lib/librte_eal/include/rte_common.h    |  1 +
 8 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/doc/guides/rawdevs/ioat.rst b/doc/guides/rawdevs/ioat.rst
index af00d77fb..3db5f5d09 100644
--- a/doc/guides/rawdevs/ioat.rst
+++ b/doc/guides/rawdevs/ioat.rst
@@ -157,9 +157,9 @@ Performing Data Copies
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 To perform data copies using IOAT rawdev devices, the functions
-``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_perform_ops()`` should be used.
 Once copies have been completed, the completion will be reported back when
-the application calls ``rte_ioat_completed_copies()``.
+the application calls ``rte_ioat_completed_ops()``.
 
 The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
 device ring for copying at a later point. The parameters to that function
@@ -172,11 +172,11 @@ pointers if packet data is being copied.
 
 While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
 the device ring, the copy will not actually be performed until after the
-application calls the ``rte_ioat_do_copies()`` function. This function
+application calls the ``rte_ioat_perform_ops()`` function. This function
 informs the device hardware of the elements enqueued on the ring, and the
 device will begin to process them. It is expected that, for efficiency
 reasons, a burst of operations will be enqueued to the device via multiple
-enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+enqueue calls between calls to the ``rte_ioat_perform_ops()`` function.
 
 The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
 a burst of copies to the device and start the hardware processing of them:
@@ -210,10 +210,10 @@ a burst of copies to the device and start the hardware processing of them:
                         return -1;
                 }
         }
-        rte_ioat_do_copies(dev_id);
+        rte_ioat_perform_ops(dev_id);
 
 To retrieve information about completed copies, the API
-``rte_ioat_completed_copies()`` should be used. This API will return to the
+``rte_ioat_completed_ops()`` should be used. This API will return to the
 application a set of completion handles passed in when the relevant copies
 were enqueued.
 
@@ -223,9 +223,9 @@ is correct before freeing the data buffers using the returned handles:
 
 .. code-block:: C
 
-        if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+        if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
                         (void *)completed_dst) != RTE_DIM(srcs)) {
-                printf("Error with rte_ioat_completed_copies\n");
+                printf("Error with rte_ioat_completed_ops\n");
                 return -1;
         }
         for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 1e73c26d4..e7d038f31 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -121,6 +121,11 @@ New Features
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
   * Added a per-device configuration flag to disable management of user-provided completion handles
+  * Renamed the ``rte_ioat_do_copies()`` API to ``rte_ioat_perform_ops()``,
+    and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
+    to better reflect the APIs' purposes, and remove the implication that
+    they are limited to copy operations only.
+    [Note: The old API is still provided but marked as deprecated in the code]
 
 
 Removed Items
@@ -234,6 +239,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* raw/ioat: As noted above, the ``rte_ioat_do_copies()`` and
+  ``rte_ioat_completed_copies()`` functions have been renamed to
+  ``rte_ioat_perform_ops()`` and ``rte_ioat_completed_ops()`` respectively.
+
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 3f7d5c34a..964160dff 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -394,7 +394,7 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
                 nb_enq = ioat_enqueue_packets(pkts_burst,
                     nb_rx, rx_config->ioat_ids[i]);
                 if (nb_enq > 0)
-                    rte_ioat_do_copies(rx_config->ioat_ids[i]);
+                    rte_ioat_perform_ops(rx_config->ioat_ids[i]);
             } else {
                 /* Perform packet software copy, free source packets */
                 int ret;
@@ -433,7 +433,7 @@ The packets are received in burst mode using ``rte_eth_rx_burst()``
 function. When using hardware copy mode the packets are enqueued in
 copying device's buffer using ``ioat_enqueue_packets()`` which calls
 ``rte_ioat_enqueue_copy()``. When all received packets are in the
-buffer the copy operations are started by calling ``rte_ioat_do_copies()``.
+buffer the copy operations are started by calling ``rte_ioat_perform_ops()``.
 Function ``rte_ioat_enqueue_copy()`` operates on physical address of
 the packet. Structure ``rte_mbuf`` contains only physical address to
 start of the data buffer (``buf_iova``). Thus the address is adjusted
@@ -490,7 +490,7 @@ or indirect mbufs, then multiple copy operations must be used.
 
 
 All completed copies are processed by ``ioat_tx_port()`` function. When using
-hardware copy mode the function invokes ``rte_ioat_completed_copies()``
+hardware copy mode the function invokes ``rte_ioat_completed_ops()``
 on each assigned IOAT channel to gather copied packets. If software copy
 mode is used the function dequeues copied packets from the rte_ring. Then each
 packet MAC address is changed if it was enabled. After that copies are sent
@@ -510,7 +510,7 @@ in burst mode using `` rte_eth_tx_burst()``.
         for (i = 0; i < tx_config->nb_queues; i++) {
             if (copy_mode == COPY_MODE_IOAT_NUM) {
                 /* Deque the mbufs from IOAT device. */
-                nb_dq = rte_ioat_completed_copies(
+                nb_dq = rte_ioat_completed_ops(
                     tx_config->ioat_ids[i], MAX_PKT_BURST,
                     (void *)mbufs_src, (void *)mbufs_dst);
             } else {
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 77f96bba3..439b46c03 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -65,12 +65,12 @@ test_enqueue_copies(int dev_id)
 			PRINT_ERR("Error with rte_ioat_enqueue_copy\n");
 			return -1;
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(10);
 
-		if (rte_ioat_completed_copies(dev_id, 1, (void *)&completed[0],
+		if (rte_ioat_completed_ops(dev_id, 1, (void *)&completed[0],
 				(void *)&completed[1]) != 1) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		if (completed[0] != src || completed[1] != dst) {
@@ -119,12 +119,12 @@ test_enqueue_copies(int dev_id)
 				return -1;
 			}
 		}
-		rte_ioat_do_copies(dev_id);
+		rte_ioat_perform_ops(dev_id);
 		usleep(100);
 
-		if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+		if (rte_ioat_completed_ops(dev_id, 64, (void *)completed_src,
 				(void *)completed_dst) != RTE_DIM(srcs)) {
-			PRINT_ERR("Error with rte_ioat_completed_copies\n");
+			PRINT_ERR("Error with rte_ioat_completed_ops\n");
 			return -1;
 		}
 		for (i = 0; i < RTE_DIM(srcs); i++) {
diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index 7067b352f..5b2c47e8c 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -74,19 +74,19 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 		int fence);
 
 /**
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  *
  * This API is used to write the "doorbell" to the hardware to trigger it
- * to begin the copy operations previously enqueued by rte_ioat_enqueue_copy()
+ * to begin the operations previously enqueued by rte_ioat_enqueue_copy()
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
  */
 static inline void
-rte_ioat_do_copies(int dev_id);
+rte_ioat_perform_ops(int dev_id);
 
 /**
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  *
  * If the hdls_disable option was not set when the device was configured,
  * the function will return to the caller the user-provided "handles" for
@@ -104,11 +104,11 @@ rte_ioat_do_copies(int dev_id);
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param src_hdls
- *   Array to hold the source handle parameters of the completed copies.
+ *   Array to hold the source handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @param dst_hdls
- *   Array to hold the destination handle parameters of the completed copies.
+ *   Array to hold the destination handle parameters of the completed ops.
  *   NOTE: If hdls_disable configuration option for the device is set, this
  *   parameter is ignored.
  * @return
@@ -117,7 +117,7 @@ rte_ioat_do_copies(int dev_id);
  *   to the src_hdls and dst_hdls array parameters.
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls);
 
 /* include the implementation details from a separate file */
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 4b7bdb8e2..b155d79c4 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -83,10 +83,10 @@ rte_ioat_enqueue_copy(int dev_id, phys_addr_t src, phys_addr_t dst,
 }
 
 /*
- * Trigger hardware to begin performing enqueued copy operations
+ * Trigger hardware to begin performing enqueued operations
  */
 static inline void
-rte_ioat_do_copies(int dev_id)
+rte_ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
 			(struct rte_ioat_rawdev *)rte_rawdevs[dev_id].dev_private;
@@ -114,10 +114,10 @@ rte_ioat_get_last_completed(struct rte_ioat_rawdev *ioat, int *error)
 }
 
 /*
- * Returns details of copy operations that have been completed
+ * Returns details of operations that have been completed
  */
 static inline int
-rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+rte_ioat_completed_ops(int dev_id, uint8_t max_copies,
 		uintptr_t *src_hdls, uintptr_t *dst_hdls)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -165,4 +165,16 @@ rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
 	return count;
 }
 
+static inline void
+__rte_deprecated_msg("use rte_ioat_perform_ops() instead")
+rte_ioat_do_copies(int dev_id) { rte_ioat_perform_ops(dev_id); }
+
+static inline int
+__rte_deprecated_msg("use rte_ioat_completed_ops() instead")
+rte_ioat_completed_copies(int dev_id, uint8_t max_copies,
+		uintptr_t *src_hdls, uintptr_t *dst_hdls)
+{
+	return rte_ioat_completed_ops(dev_id, max_copies, src_hdls, dst_hdls);
+}
+
 #endif /* _RTE_IOAT_RAWDEV_FNS_H_ */
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 288a75c7b..67f75737b 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -406,7 +406,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 			nb_enq = ioat_enqueue_packets(pkts_burst,
 				nb_rx, rx_config->ioat_ids[i]);
 			if (nb_enq > 0)
-				rte_ioat_do_copies(rx_config->ioat_ids[i]);
+				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
 		} else {
 			/* Perform packet software copy, free source packets */
 			int ret;
@@ -452,7 +452,7 @@ ioat_tx_port(struct rxtx_port_config *tx_config)
 	for (i = 0; i < tx_config->nb_queues; i++) {
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
 			/* Deque the mbufs from IOAT device. */
-			nb_dq = rte_ioat_completed_copies(
+			nb_dq = rte_ioat_completed_ops(
 				tx_config->ioat_ids[i], MAX_PKT_BURST,
 				(void *)mbufs_src, (void *)mbufs_dst);
 		} else {
diff --git a/lib/librte_eal/include/rte_common.h b/lib/librte_eal/include/rte_common.h
index 8f487a563..2920255fc 100644
--- a/lib/librte_eal/include/rte_common.h
+++ b/lib/librte_eal/include/rte_common.h
@@ -85,6 +85,7 @@ typedef uint16_t unaligned_uint16_t;
 
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated	__attribute__((__deprecated__))
+#define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
 
 /**
  * Mark a function or variable to a weak reference.
-- 
2.25.1
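
A minimal migration sketch, assuming the rte_ioat_enqueue_copy() signature
shown in the header above; the only change for existing users is the two
renamed calls noted in the comments.

#include <rte_mbuf.h>
#include <rte_ioat_rawdev.h>

/* Sketch only: enqueue a burst of copies, ring the doorbell, and later
 * collect the completions using the new function names. */
static int
copy_burst(int dev_id, struct rte_mbuf **srcs, struct rte_mbuf **dsts,
		unsigned int n)
{
	uintptr_t completed_src[64], completed_dst[64];
	unsigned int i;

	for (i = 0; i < n; i++)
		if (rte_ioat_enqueue_copy(dev_id,
				rte_pktmbuf_iova(srcs[i]), rte_pktmbuf_iova(dsts[i]),
				rte_pktmbuf_data_len(srcs[i]),
				(uintptr_t)srcs[i], (uintptr_t)dsts[i],
				0 /* no fence */) != 1)
			break;

	rte_ioat_perform_ops(dev_id);		/* was rte_ioat_do_copies() */

	/* ... later, once the hardware has had time to complete ... */
	return rte_ioat_completed_ops(dev_id, 64,	/* was rte_ioat_completed_copies() */
			completed_src, completed_dst);
}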


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
  2020-10-05 16:26  2% ` [dpdk-dev] [PATCH v3 " Vikas Gupta
@ 2020-10-07 16:45  2%   ` Vikas Gupta
  2020-10-07 17:18  2%     ` [dpdk-dev] [PATCH v5 " Vikas Gupta
  0 siblings, 1 reply; 200+ results
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta

Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs, which have a FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.

The patchset progressively adds major modules as below.
a) Detection of platform-device based on the known registered platforms and attaching with VFIO.
b) Creation of Cryptodevice.
c) Addition of session handling.
d) Add Cryptodevice into test Cryptodev framework. 

The patchset has been tested on the above mentioned SoCs.

Regards,
Vikas

Changes from v0->v1: 
      Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map

Changes from v1->v2:
	- Fix compilation errors and coding style warnings.
	- Use global test crypto suite suggested by Adam Dybkowski

Changes from v2->v3:
	- Release notes updated.
	- bcmfs.rst updated with missing information about installation.
	- Review comments from patch1 from v2 addressed.
	- Updated description about dependency of PMD driver on VFIO_PRESENT.
	- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
	- Comments on patch6 from v2 addressed and capability list is fixed.
		Removed redundant enums and macros from the file
		bcmfs_sym_defs.h and updated other impacted APIs accordingly.
		patch7 too is updated due to removal of redundancy.
	  Thanks to Akhil for pointing out the redundancy.
	- Fix minor code style issues in a few files as part of review.

Changes from v3->v4:
	- Code style issues fixed.
	- Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c

Vikas Gupta (8):
  crypto/bcmfs: add BCMFS driver
  crypto/bcmfs: add vfio support
  crypto/bcmfs: add queue pair management API
  crypto/bcmfs: add HW queue pair operations
  crypto/bcmfs: create a symmetric cryptodev
  crypto/bcmfs: add session handling and capabilities
  crypto/bcmfs: add crypto HW module
  crypto/bcmfs: add crypto pmd into cryptodev test

 MAINTAINERS                                   |    7 +
 app/test/test_cryptodev.c                     |   17 +
 app/test/test_cryptodev.h                     |    1 +
 doc/guides/cryptodevs/bcmfs.rst               |  109 ++
 doc/guides/cryptodevs/features/bcmfs.ini      |   56 +
 doc/guides/cryptodevs/index.rst               |    1 +
 doc/guides/rel_notes/release_20_11.rst        |    5 +
 drivers/crypto/bcmfs/bcmfs_dev_msg.h          |   29 +
 drivers/crypto/bcmfs/bcmfs_device.c           |  332 +++++
 drivers/crypto/bcmfs/bcmfs_device.h           |   76 ++
 drivers/crypto/bcmfs/bcmfs_hw_defs.h          |   32 +
 drivers/crypto/bcmfs/bcmfs_logs.c             |   38 +
 drivers/crypto/bcmfs/bcmfs_logs.h             |   34 +
 drivers/crypto/bcmfs/bcmfs_qp.c               |  383 ++++++
 drivers/crypto/bcmfs/bcmfs_qp.h               |  142 ++
 drivers/crypto/bcmfs/bcmfs_sym.c              |  289 +++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c |  764 +++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h |   16 +
 drivers/crypto/bcmfs/bcmfs_sym_defs.h         |   34 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c       | 1155 +++++++++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_engine.h       |  115 ++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c          |  426 ++++++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.h          |   38 +
 drivers/crypto/bcmfs/bcmfs_sym_req.h          |   62 +
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |  282 ++++
 drivers/crypto/bcmfs/bcmfs_sym_session.h      |  109 ++
 drivers/crypto/bcmfs/bcmfs_vfio.c             |  107 ++
 drivers/crypto/bcmfs/bcmfs_vfio.h             |   17 +
 drivers/crypto/bcmfs/hw/bcmfs4_rm.c           |  743 +++++++++++
 drivers/crypto/bcmfs/hw/bcmfs5_rm.c           |  677 ++++++++++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c     |   82 ++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h     |   51 +
 drivers/crypto/bcmfs/meson.build              |   20 +
 .../crypto/bcmfs/rte_pmd_bcmfs_version.map    |    3 +
 drivers/crypto/meson.build                    |    1 +
 35 files changed, 6253 insertions(+)
 create mode 100644 doc/guides/cryptodevs/bcmfs.rst
 create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
 create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
 create mode 100644 drivers/crypto/bcmfs/meson.build
 create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map

-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] 19.11 ABI changes
@ 2020-10-07  6:05  4% Денис Коновалов
  0 siblings, 0 replies; 200+ results
From: Денис Коновалов @ 2020-10-07  6:05 UTC (permalink / raw)
  To: dev

Hello!
In 17.05 I got the thread_id like this:
worker_threads.push_back(worker);
pthread_setname_np(lcore_config[lcore_id].thread_id, worker_name.c_str());
But in 19.11 lcore_config is now private and I don't see any method in
rte_lcore.h for getting the thread_id. How can I do that? Thank you.
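
(One possible direction, sketched under the assumption that rte_thread_setname()
from rte_lcore.h and rte_eal_remote_launch() are available in 19.11, is to set
the name from inside the lcore thread itself, so lcore_config is not needed:)

#include <pthread.h>
#include <stdio.h>

#include <rte_lcore.h>
#include <rte_launch.h>

static int
worker_main(void *arg)
{
	char name[16];

	/* Runs on the target lcore's own thread, so pthread_self() is the
	 * id that was previously read from lcore_config[].thread_id. */
	snprintf(name, sizeof(name), "worker-%u", rte_lcore_id());
	rte_thread_setname(pthread_self(), name);

	/* ... actual worker loop ... */
	(void)arg;
	return 0;
}

/* Started from the main lcore with:
 * rte_eal_remote_launch(worker_main, NULL, lcore_id);
 */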

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 2/2] baseband/fpga_lte_fec: add API change in release note
    2020-10-07 12:18  4% ` [dpdk-dev] [PATCH 1/2] baseband/fpga_5gnr_fec: add " Maxime Coquelin
@ 2020-10-07 12:18  4% ` Maxime Coquelin
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2020-10-07 12:18 UTC (permalink / raw)
  To: dev, akhil.goyal, thomas, nicolas.chautru; +Cc: Maxime Coquelin

Fixes: 717edebcd394 ("baseband/fpga_lte_fec: fix API naming")

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_11.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index b3dd9a3646..98ae729a27 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -233,6 +233,10 @@ API Changes
   ``rte_fpga_5gnr_fec_configure`` and structure ``fpga_5gnr_fec_conf`` to
   ``rte_fpga_5gnr_fec_conf``.
 
+* baseband/fpga_lte_fec: Renamed function ``fpga_lte_fec_configure`` to
+  ``rte_fpga_lte_fec_configure`` and structure ``fpga_lte_fec_conf`` to
+  ``rte_fpga_lte_fec_conf``.
+
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 1/2] baseband/fpga_5gnr_fec: add API change in release note
  @ 2020-10-07 12:18  4% ` Maxime Coquelin
  2020-10-07 12:18  4% ` [dpdk-dev] [PATCH 2/2] baseband/fpga_lte_fec: " Maxime Coquelin
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2020-10-07 12:18 UTC (permalink / raw)
  To: dev, akhil.goyal, thomas, nicolas.chautru; +Cc: Maxime Coquelin

Fixes: 7fd60723065e ("baseband/fpga_5gnr_fec: fix API naming")

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_11.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c0a3d76005..b3dd9a3646 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -229,6 +229,10 @@ API Changes
 
 * ipsec: ``RTE_SATP_LOG2_NUM`` has been dropped from ``enum``
 
+* baseband/fpga_5gnr_fec: Renamed function ``fpga_5gnr_fec_configure`` to
+  ``rte_fpga_5gnr_fec_configure`` and structure ``fpga_5gnr_fec_conf`` to
+  ``rte_fpga_5gnr_fec_conf``.
+
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for ChaCha20-Poly1305
  2020-10-07 10:41  4%   ` Doherty, Declan
@ 2020-10-07 12:06  4%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-07 12:06 UTC (permalink / raw)
  To: Doherty, Declan
  Cc: Adam Dybkowski, dev, Trahe, Fiona, Akhil Goyal, Arek Kusztal,
	Thomas Monjalon, Ray Kinsella

On Wed, Oct 7, 2020 at 12:41 PM Doherty, Declan
<declan.doherty@intel.com> wrote:
> >> @@ -789,33 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
> >>    * the last valid element has it's op field set to
> >>    * RTE_CRYPTO_OP_TYPE_UNDEFINED.
> >>    */
> >> -
> >> -void
> >> +extern void
> > Nit: no need for extern.
> Hey David, I think the cryptodev API consistently uses extern on nearly
> all its function declarations. I'd propose we do a separate patchset
> which removes extern from all function declarations to make it more
> consistent with the rest of DPDK's libraries.

Ok for me.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets
  2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
  2020-10-07 10:54  8%       ` [dpdk-dev] [PATCH v4 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-07 11:15  0%       ` Ori Kam
  2020-10-12 10:42  3%       ` [dpdk-dev] [PATCH v5 " Dekel Peled
  2 siblings, 0 replies; 200+ results
From: Ori Kam @ 2020-10-07 11:15 UTC (permalink / raw)
  To: Dekel Peled, NBU-Contact-Thomas Monjalon, ferruh.yigit,
	arybchenko, konstantin.ananyev, olivier.matz, wenzhuo.lu,
	beilei.xing, bernard.iremonger, Matan Azrad, Shahaf Shuler,
	Slava Ovsiienko
  Cc: dev

Hi Dekel,

> -----Original Message-----
> From: Dekel Peled <dekelp@nvidia.com>
> Sent: Wednesday, October 7, 2020 1:54 PM
> Subject: [PATCH v4 00/11] support match on L3 fragmented packets
> 
> This series implements support of matching on packets based on the
> fragmentation attribute of the packet, i.e. if packet is a fragment
> of a larger packet, or the opposite - packet is not a fragment.
> 
> In ethdev, add API to support IPv6 extension headers, and specifically
> the IPv6 fragment extension header item.
> In MLX5 PMD, support match on IPv4 fragmented packets, IPv6 fragmented
> packets, and IPv6 fragment extension header item.
> Testpmd CLI is updated accordingly.
> Documentation is updated accordingly.
> 
> ---
> v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
> v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid
> ABI breakage.
> v4: update rte_flow documentation to clarify use of IPv6 extension header
> flags.
> ---
> 
> Dekel Peled (11):
>   ethdev: add extensions attributes to IPv6 item
>   ethdev: add IPv6 fragment extension header item
>   app/testpmd: support IPv4 fragments
>   app/testpmd: support IPv6 fragments
>   app/testpmd: support IPv6 fragment extension item
>   net/mlx5: remove handling of ICMP fragmented packets
>   net/mlx5: support match on IPv4 fragment packets
>   net/mlx5: support match on IPv6 fragment packets
>   net/mlx5: support match on IPv6 fragment ext. item
>   doc: update release notes for MLX5 L3 frag support
>   net/mlx5: enforce limitation on IPv6 next proto
> 
>  app/test-pmd/cmdline_flow.c            |  53 +++++
>  doc/guides/nics/mlx5.rst               |   7 +
>  doc/guides/prog_guide/rte_flow.rst     |  34 ++-
>  doc/guides/rel_notes/release_20_11.rst |  10 +
>  drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
>  drivers/net/mlx5/mlx5_flow.h           |  14 ++
>  drivers/net/mlx5/mlx5_flow_dv.c        | 382
> +++++++++++++++++++++++++++++----
>  drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
>  lib/librte_ethdev/rte_flow.c           |   1 +
>  lib/librte_ethdev/rte_flow.h           |  45 +++-
>  lib/librte_ip_frag/rte_ip_frag.h       |  26 +--
>  lib/librte_net/rte_ip.h                |  26 ++-
>  12 files changed, 579 insertions(+), 90 deletions(-)
> 
> --
> 1.8.3.1

Series-acked-by:  Ori Kam <orika@nvidia.com>
Thanks,
Ori


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 10/11] doc: update release notes for MLX5 L3 frag support
  2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
@ 2020-10-07 10:54  8%       ` Dekel Peled
  2020-10-07 11:15  0%       ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Ori Kam
  2020-10-12 10:42  3%       ` [dpdk-dev] [PATCH v5 " Dekel Peled
  2 siblings, 0 replies; 200+ results
From: Dekel Peled @ 2020-10-07 10:54 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch updates 20.11 release notes with the changes included in
patches of this series:
1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
   packets.
2) ABI change in ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0b2a370..f39f13b 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -109,6 +109,11 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
@@ -240,6 +245,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values added to struct, indicating the existence of
+    every defined extension header type.
+    Applications should use the new values for identification of existing
+    extensions in the packet header.
 
 Known Issues
 ------------
-- 
1.8.3.1


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets
  2020-10-05  8:35  3%   ` [dpdk-dev] [PATCH v3 00/11] support match on L3 fragmented packets Dekel Peled
  2020-10-05  8:35  8%     ` [dpdk-dev] [PATCH v3 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-07 10:53  3%     ` Dekel Peled
  2020-10-07 10:54  8%       ` [dpdk-dev] [PATCH v4 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
                         ` (2 more replies)
  1 sibling, 3 replies; 200+ results
From: Dekel Peled @ 2020-10-07 10:53 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This series implements support for matching on packets based on the
fragmentation attribute of the packet, i.e. whether the packet is a
fragment of a larger packet or not.

In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
In MLX5 PMD, support match on IPv4 fragmented packets, IPv6 fragmented
packets, and IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.
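
As a usage illustration only (assuming the has_frag_ext bit that the first
patch adds to struct rte_flow_item_ipv6), an application could build a
pattern that matches only fragmented IPv6 packets roughly like this:

/* Sketch, requires <rte_flow.h>; error handling omitted. */
struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 1 };
struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };

struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
	  .spec = &ipv6_spec, .mask = &ipv6_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};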

---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
---

Dekel Peled (11):
  ethdev: add extensions attributes to IPv6 item
  ethdev: add IPv6 fragment extension header item
  app/testpmd: support IPv4 fragments
  app/testpmd: support IPv6 fragments
  app/testpmd: support IPv6 fragment extension item
  net/mlx5: remove handling of ICMP fragmented packets
  net/mlx5: support match on IPv4 fragment packets
  net/mlx5: support match on IPv6 fragment packets
  net/mlx5: support match on IPv6 fragment ext. item
  doc: update release notes for MLX5 L3 frag support
  net/mlx5: enforce limitation on IPv6 next proto

 app/test-pmd/cmdline_flow.c            |  53 +++++
 doc/guides/nics/mlx5.rst               |   7 +
 doc/guides/prog_guide/rte_flow.rst     |  34 ++-
 doc/guides/rel_notes/release_20_11.rst |  10 +
 drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
 drivers/net/mlx5/mlx5_flow.h           |  14 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 382 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 lib/librte_ethdev/rte_flow.c           |   1 +
 lib/librte_ethdev/rte_flow.h           |  45 +++-
 lib/librte_ip_frag/rte_ip_frag.h       |  26 +--
 lib/librte_net/rte_ip.h                |  26 ++-
 12 files changed, 579 insertions(+), 90 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for ChaCha20-Poly1305
  2020-10-06 12:32  9% ` David Marchand
  2020-10-06 14:27  4%   ` Dybkowski, AdamX
@ 2020-10-07 10:41  4%   ` Doherty, Declan
  2020-10-07 12:06  4%     ` David Marchand
  1 sibling, 1 reply; 200+ results
From: Doherty, Declan @ 2020-10-07 10:41 UTC (permalink / raw)
  To: David Marchand, Adam Dybkowski
  Cc: dev, Trahe, Fiona, Akhil Goyal, Arek Kusztal, Thomas Monjalon,
	Ray Kinsella


On 06/10/2020 1:32 PM, David Marchand wrote:
> For the title, I would suggest: "cryptodev: remove v20 ABI compatibility"
>
> You did this change using a revert, but still, we can avoid restoring
> coding style issues, see nits below.
>
>
> On Fri, Aug 14, 2020 at 12:00 PM Adam Dybkowski
> <adamx.dybkowski@intel.com> wrote:
>> This reverts commit a0f0de06d457753c94688d551a6e8659b4d4e041 as the
>> rte_cryptodev_info_get function versioning was a temporary solution
>> to maintain ABI compatibility for ChaCha20-Poly1305 and is not
>> needed in 20.11.
>>
...
>>
>>   int
>>   rte_cryptodev_callback_register(uint8_t dev_id,
>> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
>> index 7b3ebc20f..26abd0c52 100644
>> --- a/lib/librte_cryptodev/rte_cryptodev.h
>> +++ b/lib/librte_cryptodev/rte_cryptodev.h
>> @@ -219,14 +219,6 @@ struct rte_cryptodev_asym_capability_idx {
>>    *   - Return NULL if the capability not exist.
>>    */
>>   const struct rte_cryptodev_symmetric_capability *
>> -rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
>> -               const struct rte_cryptodev_sym_capability_idx *idx);
>> -
>> -const struct rte_cryptodev_symmetric_capability *
>> -rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
>> -               const struct rte_cryptodev_sym_capability_idx *idx);
>> -
>> -const struct rte_cryptodev_symmetric_capability *
>>   rte_cryptodev_sym_capability_get(uint8_t dev_id,
>>                  const struct rte_cryptodev_sym_capability_idx *idx);
>>
>> @@ -789,33 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
>>    * the last valid element has it's op field set to
>>    * RTE_CRYPTO_OP_TYPE_UNDEFINED.
>>    */
>> -
>> -void
>> +extern void
> Nit: no need for extern.
Hey David, I think the cryptodev API consistently uses extern on nearly
all its function declarations. I'd propose we do a separate patchset
which removes extern from all function declarations to make it more
consistent with the rest of DPDK's libraries.
>   /**
>    * Register a callback function for specific device id.
...
> Thanks for working on this.
> Note to others watching ABI, with this, it should be the last patch
> about DPDK_20 ABI.
>
>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3] mbuf: minor cleanup
  @ 2020-10-07  9:16  0% ` Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2020-10-07  9:16 UTC (permalink / raw)
  To: Morten Brørup; +Cc: thomas, dev

Hi Morten,

Thanks for this cleanup. Please see some comments below.

On Wed, Sep 16, 2020 at 12:40:13PM +0200, Morten Brørup wrote:
> The mbuf header files had some commenting style errors that affected the
> API documentation.
> Also, the RTE_ prefix was missing on a macro and a definition.
> 
> Note: This patch does not touch the offload and attachment flags that are
> also missing the RTE_ prefix.
> 
> Changes only affecting documentation:
> * Removed the MBUF_INVALID_PORT definition from rte_mbuf.h; it is
>   already defined in rte_mbuf_core.h.
>   This removal also reestablished the description of the
>   rte_pktmbuf_reset() function.
> * Corrected the comment related to RTE_MBUF_MAX_NB_SEGS.
> * Corrected the comment related to PKT_TX_QINQ_PKT.
> 
> Changes regarding missing RTE_ prefix:
> * Converted the MBUF_RAW_ALLOC_CHECK() macro to an
>   __rte_mbuf_raw_sanity_check() inline function.
>   Added backwards compatible macro with the original name.
> * Renamed the MBUF_INVALID_PORT definition to RTE_MBUF_PORT_INVALID.
>   Added backwards compatible definition with the original name.
> 
> v2:
> * Use RTE_MBUF_PORT_INVALID instead of MBUF_INVALID_PORT in rte_mbuf.c.
> 
> v3:
> * The functions/macros used in __rte_mbuf_raw_sanity_check() require
>   RTE_ENABLE_ASSERT or RTE_LIBRTE_MBUF_DEBUG, or they don't use the mbuf
>   parameter, which generates a compiler waning. So mark the mbuf parameter
>   __rte_unused if none of them are defined.
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>  doc/guides/rel_notes/deprecation.rst |  7 ----
>  lib/librte_mbuf/rte_mbuf.c           |  4 +-
>  lib/librte_mbuf/rte_mbuf.h           | 55 +++++++++++++++++++---------
>  lib/librte_mbuf/rte_mbuf_core.h      |  9 +++--
>  4 files changed, 45 insertions(+), 30 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 279eccb04..88d7d0761 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -294,13 +294,6 @@ Deprecation Notices
>    - https://patches.dpdk.org/patch/71457/
>    - https://patches.dpdk.org/patch/71456/
>  
> -* rawdev: The rawdev APIs which take a device-specific structure as
> -  parameter directly, or indirectly via a "private" pointer inside another
> -  structure, will be modified to take an additional parameter of the
> -  structure size. The affected APIs will include ``rte_rawdev_info_get``,
> -  ``rte_rawdev_configure``, ``rte_rawdev_queue_conf_get`` and
> -  ``rte_rawdev_queue_setup``.
> -
>  * acl: ``RTE_ACL_CLASSIFY_NUM`` enum value will be removed.
>    This enum value is not used inside DPDK, while it prevents to add new
>    classify algorithms without causing an ABI breakage.

I think this change is not related.

This makes me think that a deprecation notice could be added for the
old names without the RTE_ prefix, to be removed in 21.11.


> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 8a456e5e6..53a015311 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -104,7 +104,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
>  	/* init some constant fields */
>  	m->pool = mp;
>  	m->nb_segs = 1;
> -	m->port = MBUF_INVALID_PORT;
> +	m->port = RTE_MBUF_PORT_INVALID;
>  	rte_mbuf_refcnt_set(m, 1);
>  	m->next = NULL;
>  }
> @@ -207,7 +207,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
>  	/* init some constant fields */
>  	m->pool = mp;
>  	m->nb_segs = 1;
> -	m->port = MBUF_INVALID_PORT;
> +	m->port = RTE_MBUF_PORT_INVALID;
>  	m->ol_flags = EXT_ATTACHED_MBUF;
>  	rte_mbuf_refcnt_set(m, 1);
>  	m->next = NULL;
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 7259575a7..406d3abb2 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -554,12 +554,36 @@ __rte_experimental
>  int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
>  		   const char **reason);
>  
> -#define MBUF_RAW_ALLOC_CHECK(m) do {				\
> -	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);		\
> -	RTE_ASSERT((m)->next == NULL);				\
> -	RTE_ASSERT((m)->nb_segs == 1);				\
> -	__rte_mbuf_sanity_check(m, 0);				\
> -} while (0)
> +#if defined(RTE_ENABLE_ASSERT) || defined(RTE_LIBRTE_MBUF_DEBUG)

I don't see why this #if is needed. Wouldn't it work to have only
one function definition with the __rte_unused attribute?
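
For instance, something along these lines (just a sketch): RTE_ASSERT() and
__rte_mbuf_sanity_check() already expand to nothing when RTE_ENABLE_ASSERT /
RTE_LIBRTE_MBUF_DEBUG are not set, so marking the parameter __rte_unused
should be enough:

static __rte_always_inline void
__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
{
	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
	RTE_ASSERT(m->next == NULL);
	RTE_ASSERT(m->nb_segs == 1);
	__rte_mbuf_sanity_check(m, 0);
}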

> +/**
> + * Sanity checks on a reinitialized mbuf.
> + *
> + * Check the consistency of the given reinitialized mbuf.
> + * The function will cause a panic if corruption is detected.
> + *
> + * Check that the mbuf is properly reinitialized (refcnt=1, next=NULL,
> + * nb_segs=1), as done by rte_pktmbuf_prefree_seg().
> + *

Maybe indicate that these checks are only done when debug is on.

> + * @param m
> + *   The mbuf to be checked.
> + */
> +static __rte_always_inline void
> +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m)
> +{
> +	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> +	RTE_ASSERT(m->next == NULL);
> +	RTE_ASSERT(m->nb_segs == 1);
> +	__rte_mbuf_sanity_check(m, 0);
> +}
> +#else
> +static __rte_always_inline void
> +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m __rte_unused)
> +{
> +    /* Nothing here. */
> +}
> +#endif
> +/** For backwards compatibility. */
> +#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)

It looks like MBUF_RAW_ALLOC_CHECK() is also used in drivers/net/sfc;
I think it should be updated too.

>  
>  /**
>   * Allocate an uninitialized mbuf from mempool *mp*.
> @@ -586,7 +610,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
>  
>  	if (rte_mempool_get(mp, (void **)&m) < 0)
>  		return NULL;
> -	MBUF_RAW_ALLOC_CHECK(m);
> +	__rte_mbuf_raw_sanity_check(m);
>  	return m;
>  }
>  
> @@ -609,10 +633,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
>  {
>  	RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
>  		  (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
> -	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> -	RTE_ASSERT(m->next == NULL);
> -	RTE_ASSERT(m->nb_segs == 1);
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_raw_sanity_check(m);
>  	rte_mempool_put(m->pool, m);
>  }
>  
> @@ -858,8 +879,6 @@ static inline void rte_pktmbuf_reset_headroom(struct rte_mbuf *m)
>   * @param m
>   *   The packet mbuf to be reset.
>   */
> -#define MBUF_INVALID_PORT UINT16_MAX
> -
>  static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
>  {
>  	m->next = NULL;
> @@ -868,7 +887,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
>  	m->vlan_tci = 0;
>  	m->vlan_tci_outer = 0;
>  	m->nb_segs = 1;
> -	m->port = MBUF_INVALID_PORT;
> +	m->port = RTE_MBUF_PORT_INVALID;
>  
>  	m->ol_flags &= EXT_ATTACHED_MBUF;
>  	m->packet_type = 0;
> @@ -931,22 +950,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
>  	switch (count % 4) {
>  	case 0:
>  		while (idx != count) {
> -			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> +			__rte_mbuf_raw_sanity_check(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 3:
> -			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> +			__rte_mbuf_raw_sanity_check(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 2:
> -			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> +			__rte_mbuf_raw_sanity_check(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 1:
> -			MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> +			__rte_mbuf_raw_sanity_check(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
> diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h
> index 8cd7137ac..4ac5609e3 100644
> --- a/lib/librte_mbuf/rte_mbuf_core.h
> +++ b/lib/librte_mbuf/rte_mbuf_core.h
> @@ -272,7 +272,7 @@ extern "C" {
>   * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
>   */
>  #define PKT_TX_QINQ        (1ULL << 49)
> -/* this old name is deprecated */
> +/** This old name is deprecated. */
>  #define PKT_TX_QINQ_PKT    PKT_TX_QINQ
>  
>  /**
> @@ -686,7 +686,7 @@ struct rte_mbuf_ext_shared_info {
>  	};
>  };
>  
> -/**< Maximum number of nb_segs allowed. */
> +/** Maximum number of nb_segs allowed. */
>  #define RTE_MBUF_MAX_NB_SEGS	UINT16_MAX
>  
>  /**
> @@ -714,7 +714,10 @@ struct rte_mbuf_ext_shared_info {
>  #define RTE_MBUF_DIRECT(mb) \
>  	(!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
>  
> -#define MBUF_INVALID_PORT UINT16_MAX
> +/** NULL value for the uint16_t port type. */
> +#define RTE_MBUF_PORT_INVALID UINT16_MAX

I don't really like talking about "NULL". What do you think of
this wording instead?

  /** Uninitialized or unspecified port */

> +/** For backwards compatibility. */
> +#define MBUF_INVALID_PORT RTE_MBUF_PORT_INVALID
>  
>  /**
>   * A macro that points to an offset into the data in the mbuf.
> -- 
> 2.17.1
> 

Thanks,
Olivier

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/1] crypto/scheduler: rename slave to worker
  @ 2020-10-06 20:49  0%       ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-06 20:49 UTC (permalink / raw)
  To: Ruifeng Wang, Adam Dybkowski, dev, fiona.trahe; +Cc: Fan Zhang, nd

> >
> > This patch replaces the usage of the word 'slave' with the more
> > appropriate word 'worker' in QAT PMD and Scheduler PMD
> > as well as in their docs. Also the test app was modified
> > to use the new wording.
> >
> > The Scheduler PMD's public API was modified according to the
> > previous deprecation notice:
> > rte_cryptodev_scheduler_slave_attach is now called
> > rte_cryptodev_scheduler_worker_attach,
> > rte_cryptodev_scheduler_slave_detach is
> > rte_cryptodev_scheduler_worker_detach,
> > rte_cryptodev_scheduler_slaves_get is
> > rte_cryptodev_scheduler_workers_get.
> >
> > Also, the configuration value
> > RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
> > was renamed to RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS.
> >
> > Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
> > Acked-by: Fan Zhang <roy.fan.zhang@intel.com>

> 
> Looks good from ABI perspective.
> 
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Applied to dpdk-next-crypto

Thanks!

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] ipsec: remove experimental tag
  2020-10-06 20:11  0%       ` Akhil Goyal
@ 2020-10-06 20:29  0%         ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2020-10-06 20:29 UTC (permalink / raw)
  To: Kinsella, Ray, dev

> > On 16/09/2020 12:22, Ananyev, Konstantin wrote:
> > >
> > >> Since librte_ipsec was first introduced in 19.02 and there were no changes
> > >> in its public API since 19.11, it should be considered mature enough to
> > >> remove the 'experimental' tag from it.
> > >> The RTE_SATP_LOG2_NUM enum is also being dropped from
> rte_ipsec_sa.h
> > to
> > >> avoid possible ABI problems in the future.
> > >>
> > >> ---
> > >
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > >
> > >> 2.25.1
> > >
> >
> > Acked-by: Ray Kinsella <mdr@ashroe.eu>
> 
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] ipsec: remove experimental tag
  2020-10-05  8:59  0%     ` Kinsella, Ray
@ 2020-10-06 20:11  0%       ` Akhil Goyal
  2020-10-06 20:29  0%         ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-06 20:11 UTC (permalink / raw)
  To: Kinsella, Ray, dev

> 
> On 16/09/2020 12:22, Ananyev, Konstantin wrote:
> >
> >> Since librte_ipsec was first introduced in 19.02 and there were no changes
> >> in its public API since 19.11, it should be considered mature enough to
> >> remove the 'experimental' tag from it.
> >> The RTE_SATP_LOG2_NUM enum is also being dropped from rte_ipsec_sa.h
> to
> >> avoid possible ABI problems in the future.
> >>
> >> ---
> >
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >
> >> 2.25.1
> >
> 
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods
  2020-10-06 15:05  3%   ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods David Marchand
@ 2020-10-06 16:07  3%     ` Ananyev, Konstantin
  2020-10-14  9:23  4%       ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-10-06 16:07 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran,
	Ruifeng Wang (Arm Technology China),
	Medvedkin, Vladimir, Thomas Monjalon, Ray Kinsella, Richardson,
	Bruce


> 
> On Mon, Oct 5, 2020 at 9:44 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
> >
> > These patch series introduce support of AVX512 specific classify
> > implementation for ACL library.
> > It adds two new algorithms:
> >  - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
> >    It uses 256-bit width instructions/registers only
> >    (to avoid frequency level change).
> >    On my SKX box test-acl shows ~15-30% improvement
> >    (depending on rule-set and input burst size)
> >    when switching from AVX2 to AVX512X16 classify algorithms.
> >  - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
> >    It uses 512-bit width instructions/registers and provides higher
> >    performance then AVX512X16, but can cause frequency level change.
> >    On my SKX box test-acl shows ~50-70% improvement
> >    (depending on rule-set and input burst size)
> >    when switching from AVX2 to AVX512X32 classify algorithms.
> >    ICX and CLX testing showed similar level of speedup.
> >
> > Current AVX512 classify implementation is only supported on x86_64.
> > Note that this series introduce a formal ABI incompatibility
> 
> The only API change I can see is in rte_acl_classify_alg() new error
> code but I don't think we need an announcement for this.
> As for ABI, we are breaking it in this release, so I see no pb.

Cool, I just wanted to underline that patch #3:
https://patches.dpdk.org/patch/79786/
is a formal ABI breakage.

> 
> 
> > with previous versions of ACL library.
> >
> > v2 -> v3:
> >   Fix checkpatch warnings
> >   Split AVX512 algorithm into two and deduplicate common code
> 
> Patch 7 still references a RTE_MACHINE_CPUFLAG flag.
> Can you rework now that those flags have been dropped?
> 

Should be fixed in v4:
https://patches.dpdk.org/project/dpdk/list/?series=12721

One more thing to mention - this series has a dependency on Vladimir's patch:
https://patches.dpdk.org/patch/79310/ ("eal/x86: introduce AVX 512-bit type"),
so CI/travis would still report an error.

Thanks
Konstantin


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4 03/14] acl: remove of unused enum value
  2020-10-06 15:03  3%   ` [dpdk-dev] [PATCH v4 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
@ 2020-10-06 15:03 20%     ` Konstantin Ananyev
  0 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2020-10-06 15:03 UTC (permalink / raw)
  To: dev; +Cc: jerinj, ruifeng.wang, vladimir.medvedkin, Konstantin Ananyev

Removal of unused enum value (RTE_ACL_CLASSIFY_NUM).
This enum value is not used inside DPDK, while it prevents adding
new classify algorithms without causing an ABI breakage.

Note that this change introduces a formal ABI incompatibility
with previous versions of the ACL library.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 doc/guides/rel_notes/deprecation.rst   | 4 ----
 doc/guides/rel_notes/release_20_11.rst | 4 ++++
 lib/librte_acl/rte_acl.h               | 1 -
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8080a28896..938e967c8f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -209,10 +209,6 @@ Deprecation Notices
   - https://patches.dpdk.org/patch/71457/
   - https://patches.dpdk.org/patch/71456/
 
-* acl: ``RTE_ACL_CLASSIFY_NUM`` enum value will be removed.
-  This enum value is not used inside DPDK, while it prevents to add new
-  classify algorithms without causing an ABI breakage.
-
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6d8c24413d..e0de60c0c2 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -210,6 +210,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* acl: ``RTE_ACL_CLASSIFY_NUM`` enum value has been removed.
+  This enum value was not used inside DPDK, while it prevented to add new
+  classify algorithms without causing an ABI breakage.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_acl/rte_acl.h b/lib/librte_acl/rte_acl.h
index aa22e70c6e..b814423a63 100644
--- a/lib/librte_acl/rte_acl.h
+++ b/lib/librte_acl/rte_acl.h
@@ -241,7 +241,6 @@ enum rte_acl_classify_alg {
 	RTE_ACL_CLASSIFY_AVX2 = 3,    /**< requires AVX2 support. */
 	RTE_ACL_CLASSIFY_NEON = 4,    /**< requires NEON support. */
 	RTE_ACL_CLASSIFY_ALTIVEC = 5,    /**< requires ALTIVEC support. */
-	RTE_ACL_CLASSIFY_NUM          /* should always be the last one. */
 };
 
 /**
-- 
2.17.1


^ permalink raw reply	[relevance 20%]

* [dpdk-dev] [PATCH v4 00/14] acl: introduce AVX512 classify methods
  2020-10-05 18:45  3% ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
  2020-10-05 18:45 20%   ` [dpdk-dev] [PATCH v3 03/14] acl: remove of unused enum value Konstantin Ananyev
@ 2020-10-06 15:03  3%   ` Konstantin Ananyev
  2020-10-06 15:03 20%     ` [dpdk-dev] [PATCH v4 03/14] acl: remove of unused enum value Konstantin Ananyev
  2020-10-06 15:05  3%   ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods David Marchand
  2 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2020-10-06 15:03 UTC (permalink / raw)
  To: dev; +Cc: jerinj, ruifeng.wang, vladimir.medvedkin, Konstantin Ananyev

These patch series introduce support of AVX512 specific classify
implementation for ACL library.
It adds two new algorithms:
 - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
   It uses 256-bit width instructions/registers only
   (to avoid frequency level change).
   On my SKX box test-acl shows ~15-30% improvement
   (depending on rule-set and input burst size)
   when switching from AVX2 to AVX512X16 classify algorithms.
 - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
   It uses 512-bit width instructions/registers and provides higher
   performance than AVX512X16, but can cause a frequency level change.
   On my SKX box test-acl shows ~50-70% improvement
   (depending on rule-set and input burst size)
   when switching from AVX2 to AVX512X32 classify algorithms.
   ICX and CLX testing showed similar level of speedup.

The current AVX512 classify implementation is only supported on x86_64.
Note that this series introduces a formal ABI incompatibility
with previous versions of the ACL library.
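
As a usage illustration (not part of this series), an application holding an
existing struct rte_acl_ctx *ctx could opt in through the existing
rte_acl_set_ctx_classify() API, e.g.:

/* Sketch only: prefer the 512-bit method and fall back when the CPU or
 * build does not support it (rte_acl_set_ctx_classify() returns non-zero). */
if (rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32) != 0 &&
    rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X16) != 0)
	rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_DEFAULT);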

Depends-on: patch-79310 ("eal/x86: introduce AVX 512-bit type")

v3 -> v4
  Fix problems with meson 0.47
  Updates to conform to the latest changes in the mainline
  (removal of RTE_MACHINE_CPUFLAG_*)
  Fix checkpatch warnings

v2 -> v3:
  Fix checkpatch warnings
  Split AVX512 algorithm into two and deduplicate common code
v1 -> v2:
  Deduplicated 8/16 code paths as much as possible
  Updated default algorithm selection
    Removed library constructor to make it easier to integrate with
    https://patches.dpdk.org/project/dpdk/list/?series=11831
  Updated docs


Konstantin Ananyev (14):
  acl: fix x86 build when compiler doesn't support AVX2
  doc: fix missing classify methods in ACL guide
  acl: remove of unused enum value
  acl: remove library constructor
  app/acl: few small improvements
  test/acl: expand classify test coverage
  acl: add infrastructure to support AVX512 classify
  acl: introduce 256-bit width AVX512 classify implementation
  acl: update default classify algorithm selection
  acl: introduce 512-bit width AVX512 classify implementation
  acl: for AVX512 classify use 4B load whenever possible
  acl: deduplicate AVX512 code paths
  test/acl: add AVX512 classify support
  app/acl: add AVX512 classify support

 app/test-acl/main.c                           |  23 +-
 app/test/test_acl.c                           | 105 ++--
 config/x86/meson.build                        |   3 +-
 .../prog_guide/packet_classif_access_ctrl.rst |  20 +
 doc/guides/rel_notes/deprecation.rst          |   4 -
 doc/guides/rel_notes/release_20_11.rst        |  12 +
 lib/librte_acl/acl.h                          |  16 +
 lib/librte_acl/acl_bld.c                      |  34 ++
 lib/librte_acl/acl_gen.c                      |   2 +-
 lib/librte_acl/acl_run_avx512.c               | 164 ++++++
 lib/librte_acl/acl_run_avx512_common.h        | 477 ++++++++++++++++++
 lib/librte_acl/acl_run_avx512x16.h            | 341 +++++++++++++
 lib/librte_acl/acl_run_avx512x8.h             | 253 ++++++++++
 lib/librte_acl/meson.build                    |  48 ++
 lib/librte_acl/rte_acl.c                      | 212 ++++++--
 lib/librte_acl/rte_acl.h                      |   4 +-
 16 files changed, 1618 insertions(+), 100 deletions(-)
 create mode 100644 lib/librte_acl/acl_run_avx512.c
 create mode 100644 lib/librte_acl/acl_run_avx512_common.h
 create mode 100644 lib/librte_acl/acl_run_avx512x16.h
 create mode 100644 lib/librte_acl/acl_run_avx512x8.h

-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods
  2020-10-05 18:45  3% ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
  2020-10-05 18:45 20%   ` [dpdk-dev] [PATCH v3 03/14] acl: remove of unused enum value Konstantin Ananyev
  2020-10-06 15:03  3%   ` [dpdk-dev] [PATCH v4 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
@ 2020-10-06 15:05  3%   ` David Marchand
  2020-10-06 16:07  3%     ` Ananyev, Konstantin
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-10-06 15:05 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Jerin Jacob Kollanukkaran,
	Ruifeng Wang (Arm Technology China),
	Vladimir Medvedkin, Thomas Monjalon, Ray Kinsella,
	Bruce Richardson

On Mon, Oct 5, 2020 at 9:44 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> These patch series introduce support of AVX512 specific classify
> implementation for ACL library.
> It adds two new algorithms:
>  - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
>    It uses 256-bit width instructions/registers only
>    (to avoid frequency level change).
>    On my SKX box test-acl shows ~15-30% improvement
>    (depending on rule-set and input burst size)
>    when switching from AVX2 to AVX512X16 classify algorithms.
>  - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
>    It uses 512-bit width instructions/registers and provides higher
>    performance then AVX512X16, but can cause frequency level change.
>    On my SKX box test-acl shows ~50-70% improvement
>    (depending on rule-set and input burst size)
>    when switching from AVX2 to AVX512X32 classify algorithms.
>    ICX and CLX testing showed similar level of speedup.
>
> Current AVX512 classify implementation is only supported on x86_64.
> Note that this series introduce a formal ABI incompatibility

The only API change I can see is in rte_acl_classify_alg() new error
code but I don't think we need an announcement for this.
As for ABI, we are breaking it in this release, so I see no pb.


> with previous versions of ACL library.
>
> v2 -> v3:
>   Fix checkpatch warnings
>   Split AVX512 algorithm into two and deduplicate common code

Patch 7 still references a RTE_MACHINE_CPUFLAG flag.
Can you rework now that those flags have been dropped?

Thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for ChaCha20-Poly1305
  2020-10-06 12:32  9% ` David Marchand
@ 2020-10-06 14:27  4%   ` Dybkowski, AdamX
  2020-10-07 10:41  4%   ` Doherty, Declan
  1 sibling, 0 replies; 200+ results
From: Dybkowski, AdamX @ 2020-10-06 14:27 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Trahe, Fiona, Akhil Goyal, Kusztal, ArkadiuszX,
	Thomas Monjalon, Ray Kinsella

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, 6 October, 2020 14:32
> To: Dybkowski, AdamX <adamx.dybkowski@intel.com>
> Cc: dev <dev@dpdk.org>; Trahe, Fiona <fiona.trahe@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; Ray Kinsella <mdr@ashroe.eu>
> Subject: Re: [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for
> ChaCha20-Poly1305
> 
> For the title, I would suggest: "cryptodev: remove v20 ABI compatibility"
> 
> You did this change using a revert, but still, we can avoid restoring coding
> style issues, see nits below.

Thanks for the review, David.
I'll fix these styling issues and send v2 later this week.

Adam Dybkowski



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  2020-10-06 13:10  0%           ` Nipun Gupta
@ 2020-10-06 13:13  0%             ` Jerin Jacob
  2020-10-08  8:53  0%               ` Nipun Gupta
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-06 13:13 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Stephen Hemminger, dpdk-dev, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko, Hemant Agrawal, Sachin Saxena, Rohit Raj

On Tue, Oct 6, 2020 at 6:40 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Tuesday, October 6, 2020 5:31 PM
> > To: Nipun Gupta <nipun.gupta@nxp.com>
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> > <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> > <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> > <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > packets
> >
> > On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Monday, October 5, 2020 9:40 PM
> > > > To: Stephen Hemminger <stephen@networkplumber.org>
> > > > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev <dev@dpdk.org>;
> > Thomas
> > > > Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant Agrawal
> > > > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> > Rohit
> > > > Raj <rohit.raj@nxp.com>
> > > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > > > packets
> > > >
> > > > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
> > > > <stephen@networkplumber.org> wrote:
> > > > >
> > > > > On Mon,  5 Oct 2020 12:45:04 +0530
> > > > > nipun.gupta@nxp.com wrote:
> > > > >
> > > > > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > >
> > > > > > This change adds a RX offload capability, which once enabled,
> > > > > > hardware will drop the packets in case there of any error in
> > > > > > the packet such as L3 checksum error or L4 checksum.
> > > >
> > > > IMO, Providing additional support up to the level to choose the errors
> > > > to drops give more control to the application. Meaning,
> > > > L1 errors such as FCS error
> > > > L2 errors ..
> > > > L3 errors such checksum
> > > > i.e ethdev spec need to have  error level supported by PMD and the
> > > > application can set the layers interested to drop.
> > >
> > > Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there to drop all
> > the
> > > error packets? Maybe we can rename it to
> > DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.
> >
> > IMHO,  we introduce such shortcut for a single flag for all err drop
> > then we can not change the scheme
> > without an API/ABI break.
>
> Are the following offloads fine:
>         DEV_RX_OFFLOAD_L1_FCS_ERR_PKT_DROP
>         DEV_RX_OFFLOAD_L3_CSUM_ERR_PKT_DROP
>         DEV_RX_OFFLOAD_L4_CSUM_ERR_PKT_DROP
>         DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP
>
> Please let me know in case I need to add any other too.

I think a single offload flag plus some config/capability structure to
define the additional layer selection would be good, instead of adding
a lot of new offload flags.
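
Purely as a strawman (none of these names exist in ethdev today, they are
invented here for discussion), the per-layer selection could look roughly like:

/* Hypothetical sketch only. */
struct rte_eth_err_drop_conf {
	uint32_t l1_fcs:1;   /* drop packets with FCS errors */
	uint32_t l3_csum:1;  /* drop packets with L3 checksum errors */
	uint32_t l4_csum:1;  /* drop packets with L4 checksum errors */
};

The PMD would then advertise the layers it supports, and the application
would request a subset of them together with the single drop offload flag.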


> Ill send a v3.
>
> Thanks,
> Nipun
>
> >
> > >
> > > Currently we have not planned to add separate knobs for separate error in
> > > the driver, maybe we can define them separately, or we need have them in
> > > this series itself?
> >
> > I think, ethdev API can have the capability on what are levels it
> > supported, in your
> > driver case, you can express the same.
> >
> >
> > >
> > > >
> > > > > >
> > > > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> > > > > > ---
> > > > > > These patches are based over series:
> > > > > >
> > > >
> > https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwo
> > > >
> > rk.dpdk.org%2Fpatch%2F78630%2F&amp;data=02%7C01%7Cnipun.gupta%40nx
> > > >
> > p.com%7C90b516fd465c48945e7008d869492b3e%7C686ea1d3bc2b4c6fa92cd9
> > > >
> > 9c5c301635%7C0%7C0%7C637375110263097933&amp;sdata=RBQswMBsfpM6
> > > > nyKur%2FaHvOMvNK7RU%2BRyhHt%2FXBsP1OM%3D&amp;reserved=0
> > > > > >
> > > > > > Changes in v2:
> > > > > >  - Add support in DPAA1 driver (patch 2/3)
> > > > > >  - Add support and config parameter in testpmd (patch 3/3)
> > > > > >
> > > > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
> > > > > >  1 file changed, 1 insertion(+)
> > > > >
> > > > > Maybe this should be an rte_flow match/action which would then make it
> > > > > more flexible?
> > > >
> > > > I think, it is not based on any Patten matching. So IMO, it should be best if it
> > > > is part of RX offload.
> > > >
> > > > >
> > > > > There is not much of a performance gain for this in real life and
> > > > > if only one driver supports it then I am not convinced this is needed.
> > > >
> > > > Marvell HW has this feature.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  2020-10-06 12:01  3%         ` Jerin Jacob
@ 2020-10-06 13:10  0%           ` Nipun Gupta
  2020-10-06 13:13  0%             ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Nipun Gupta @ 2020-10-06 13:10 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Stephen Hemminger, dpdk-dev, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko, Hemant Agrawal, Sachin Saxena, Rohit Raj



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, October 6, 2020 5:31 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; dpdk-dev
> <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> <sachin.saxena@nxp.com>; Rohit Raj <rohit.raj@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> packets
> 
> On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Monday, October 5, 2020 9:40 PM
> > > To: Stephen Hemminger <stephen@networkplumber.org>
> > > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev <dev@dpdk.org>;
> Thomas
> > > Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant Agrawal
> > > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> Rohit
> > > Raj <rohit.raj@nxp.com>
> > > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > > packets
> > >
> > > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
> > > <stephen@networkplumber.org> wrote:
> > > >
> > > > On Mon,  5 Oct 2020 12:45:04 +0530
> > > > nipun.gupta@nxp.com wrote:
> > > >
> > > > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > > > >
> > > > > This change adds a RX offload capability, which once enabled,
> > > > > hardware will drop the packets in case there of any error in
> > > > > the packet such as L3 checksum error or L4 checksum.
> > >
> > > IMO, Providing additional support up to the level to choose the errors
> > > to drops give more control to the application. Meaning,
> > > L1 errors such as FCS error
> > > L2 errors ..
> > > L3 errors such checksum
> > > i.e ethdev spec need to have  error level supported by PMD and the
> > > application can set the layers interested to drop.
> >
> > Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there to drop all
> the
> > error packets? Maybe we can rename it to
> DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.
> 
> IMHO,  we introduce such shortcut for a single flag for all err drop
> then we can not change the scheme
> without an API/ABI break.

Are the following offloads fine:
	DEV_RX_OFFLOAD_L1_FCS_ERR_PKT_DROP
	DEV_RX_OFFLOAD_L3_CSUM_ERR_PKT_DROP
	DEV_RX_OFFLOAD_L4_CSUM_ERR_PKT_DROP
	DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP

Please let me know in case I need to add any others too.
I'll send a v3.

Thanks,
Nipun

> 
> >
> > Currently we have not planned to add separate knobs for separate error in
> > the driver, maybe we can define them separately, or we need have them in
> > this series itself?
> 
> I think, ethdev API can have the capability on what are levels it
> supported, in your
> driver case, you can express the same.
> 
> 
> >
> > >
> > > > >
> > > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> > > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> > > > > ---
> > > > > These patches are based over series:
> > > > >
> > >
> https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwo
> > >
> rk.dpdk.org%2Fpatch%2F78630%2F&amp;data=02%7C01%7Cnipun.gupta%40nx
> > >
> p.com%7C90b516fd465c48945e7008d869492b3e%7C686ea1d3bc2b4c6fa92cd9
> > >
> 9c5c301635%7C0%7C0%7C637375110263097933&amp;sdata=RBQswMBsfpM6
> > > nyKur%2FaHvOMvNK7RU%2BRyhHt%2FXBsP1OM%3D&amp;reserved=0
> > > > >
> > > > > Changes in v2:
> > > > >  - Add support in DPAA1 driver (patch 2/3)
> > > > >  - Add support and config parameter in testpmd (patch 3/3)
> > > > >
> > > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
> > > > >  1 file changed, 1 insertion(+)
> > > >
> > > > Maybe this should be an rte_flow match/action which would then make it
> > > > more flexible?
> > >
> > > I think, it is not based on any Patten matching. So IMO, it should be best if it
> > > is part of RX offload.
> > >
> > > >
> > > > There is not much of a performance gain for this in real life and
> > > > if only one driver supports it then I am not convinced this is needed.
> > >
> > > Marvell HW has this feature.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for ChaCha20-Poly1305
  @ 2020-10-06 12:32  9% ` David Marchand
  2020-10-06 14:27  4%   ` Dybkowski, AdamX
  2020-10-07 10:41  4%   ` Doherty, Declan
  2020-10-08  8:32  9% ` [dpdk-dev] [PATCH v2 0/1] cryptodev: remove v20 ABI compatibility Adam Dybkowski
  1 sibling, 2 replies; 200+ results
From: David Marchand @ 2020-10-06 12:32 UTC (permalink / raw)
  To: Adam Dybkowski
  Cc: dev, Trahe, Fiona, Akhil Goyal, Arek Kusztal, Thomas Monjalon,
	Ray Kinsella

For the title, I would suggest: "cryptodev: remove v20 ABI compatibility"

You did this change using a revert, but still, we can avoid restoring
coding style issues, see nits below.


On Fri, Aug 14, 2020 at 12:00 PM Adam Dybkowski
<adamx.dybkowski@intel.com> wrote:
>
> This reverts commit a0f0de06d457753c94688d551a6e8659b4d4e041 as the
> rte_cryptodev_info_get function versioning was a temporary solution
> to maintain ABI compatibility for ChaCha20-Poly1305 and is not
> needed in 20.11.
>
> Fixes: a0f0de06d457 ("cryptodev: fix ABI compatibility for ChaCha20-Poly1305")
>
> Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
> Reviewed-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> ---
>  lib/librte_cryptodev/meson.build              |   1 -
>  lib/librte_cryptodev/rte_cryptodev.c          | 147 +-----------------
>  lib/librte_cryptodev/rte_cryptodev.h          |  34 +---
>  .../rte_cryptodev_version.map                 |   6 -
>  4 files changed, 6 insertions(+), 182 deletions(-)
>
> diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
> index df1144058..c4c6b3b6a 100644
> --- a/lib/librte_cryptodev/meson.build
> +++ b/lib/librte_cryptodev/meson.build
> @@ -1,7 +1,6 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2017-2019 Intel Corporation
>
> -use_function_versioning = true
>  sources = files('rte_cryptodev.c', 'rte_cryptodev_pmd.c', 'cryptodev_trace_points.c')
>  headers = files('rte_cryptodev.h',
>         'rte_cryptodev_pmd.h',
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> index 1dd795bcb..6c9a19f25 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -36,8 +36,6 @@
>  #include <rte_errno.h>
>  #include <rte_spinlock.h>
>  #include <rte_string_fns.h>
> -#include <rte_compat.h>
> -#include <rte_function_versioning.h>
>
>  #include "rte_crypto.h"
>  #include "rte_cryptodev.h"
> @@ -59,14 +57,6 @@ static struct rte_cryptodev_global cryptodev_globals = {
>  /* spinlock for crypto device callbacks */
>  static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
> -static const struct rte_cryptodev_capabilities
> -               cryptodev_undefined_capabilities[] = {
> -               RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
> -};
> -
> -static struct rte_cryptodev_capabilities
> -               *capability_copy[RTE_CRYPTO_MAX_DEVS];
> -static uint8_t is_capability_checked[RTE_CRYPTO_MAX_DEVS];
>

Nit: remove empty line.

>  /**
>   * The user application callback description.
> @@ -291,43 +281,8 @@ rte_crypto_auth_operation_strings[] = {
>                 [RTE_CRYPTO_AUTH_OP_GENERATE]   = "generate"
>  };
>
> -const struct rte_cryptodev_symmetric_capability __vsym *
> -rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
> -               const struct rte_cryptodev_sym_capability_idx *idx)
> -{
> -       const struct rte_cryptodev_capabilities *capability;
> -       struct rte_cryptodev_info dev_info;
> -       int i = 0;
> -
> -       rte_cryptodev_info_get_v20(dev_id, &dev_info);
> -
> -       while ((capability = &dev_info.capabilities[i++])->op !=
> -                       RTE_CRYPTO_OP_TYPE_UNDEFINED) {
> -               if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
> -                       continue;
> -
> -               if (capability->sym.xform_type != idx->type)
> -                       continue;
> -
> -               if (idx->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
> -                       capability->sym.auth.algo == idx->algo.auth)
> -                       return &capability->sym;
> -
> -               if (idx->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
> -                       capability->sym.cipher.algo == idx->algo.cipher)
> -                       return &capability->sym;
> -
> -               if (idx->type == RTE_CRYPTO_SYM_XFORM_AEAD &&
> -                               capability->sym.aead.algo == idx->algo.aead)
> -                       return &capability->sym;
> -       }
> -
> -       return NULL;
> -}
> -VERSION_SYMBOL(rte_cryptodev_sym_capability_get, _v20, 20.0);
> -
> -const struct rte_cryptodev_symmetric_capability __vsym *
> -rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
> +const struct rte_cryptodev_symmetric_capability *
> +rte_cryptodev_sym_capability_get(uint8_t dev_id,
>                 const struct rte_cryptodev_sym_capability_idx *idx)
>  {
>         const struct rte_cryptodev_capabilities *capability;
> @@ -358,12 +313,8 @@ rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
>         }
>
>         return NULL;
> +

Nit: remove unneeded extra line.


>  }
> -MAP_STATIC_SYMBOL(const struct rte_cryptodev_symmetric_capability *
> -               rte_cryptodev_sym_capability_get(uint8_t dev_id,
> -               const struct rte_cryptodev_sym_capability_idx *idx),
> -               rte_cryptodev_sym_capability_get_v21);
> -BIND_DEFAULT_SYMBOL(rte_cryptodev_sym_capability_get, _v21, 21);
>
>  static int
>  param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
> @@ -1085,12 +1036,6 @@ rte_cryptodev_close(uint8_t dev_id)
>         retval = (*dev->dev_ops->dev_close)(dev);
>         rte_cryptodev_trace_close(dev_id, retval);
>
> -       if (capability_copy[dev_id]) {
> -               free(capability_copy[dev_id]);
> -               capability_copy[dev_id] = NULL;
> -       }
> -       is_capability_checked[dev_id] = 0;
> -
>         if (retval < 0)
>                 return retval;
>
> @@ -1233,61 +1178,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
>         (*dev->dev_ops->stats_reset)(dev);
>  }
>
> -static void
> -get_v20_capabilities(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
> -{
> -       const struct rte_cryptodev_capabilities *capability;
> -       uint8_t found_invalid_capa = 0;
> -       uint8_t counter = 0;
> -
> -       for (capability = dev_info->capabilities;
> -                       capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED;
> -                       ++capability, ++counter) {
> -               if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
> -                               capability->sym.xform_type ==
> -                                       RTE_CRYPTO_SYM_XFORM_AEAD
> -                               && capability->sym.aead.algo >=
> -                               RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
> -                       found_invalid_capa = 1;
> -                       counter--;
> -               }
> -       }
> -       is_capability_checked[dev_id] = 1;
> -       if (!found_invalid_capa)
> -               return;
> -       capability_copy[dev_id] = malloc(counter *
> -               sizeof(struct rte_cryptodev_capabilities));
> -       if (capability_copy[dev_id] == NULL) {
> -                /*
> -                 * error case - no memory to store the trimmed
> -                 * list, so have to return an empty list
> -                 */
> -               dev_info->capabilities =
> -                       cryptodev_undefined_capabilities;
> -               is_capability_checked[dev_id] = 0;
> -       } else {
> -               counter = 0;
> -               for (capability = dev_info->capabilities;
> -                               capability->op !=
> -                               RTE_CRYPTO_OP_TYPE_UNDEFINED;
> -                               capability++) {
> -                       if (!(capability->op ==
> -                               RTE_CRYPTO_OP_TYPE_SYMMETRIC
> -                               && capability->sym.xform_type ==
> -                               RTE_CRYPTO_SYM_XFORM_AEAD
> -                               && capability->sym.aead.algo >=
> -                               RTE_CRYPTO_AEAD_CHACHA20_POLY1305)) {
> -                               capability_copy[dev_id][counter++] =
> -                                               *capability;
> -                       }
> -               }
> -               dev_info->capabilities =
> -                               capability_copy[dev_id];
> -       }
> -}

Nit: remove empty line.


>
> -void __vsym
> -rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
> +void
> +rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
>  {
>         struct rte_cryptodev *dev;
>
> @@ -1303,40 +1196,10 @@ rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
>         RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
>         (*dev->dev_ops->dev_infos_get)(dev, dev_info);
>
> -       if (capability_copy[dev_id] == NULL) {
> -               if (!is_capability_checked[dev_id])
> -                       get_v20_capabilities(dev_id, dev_info);
> -       } else
> -               dev_info->capabilities = capability_copy[dev_id];
> -
>         dev_info->driver_name = dev->device->driver->name;
>         dev_info->device = dev->device;
>  }
> -VERSION_SYMBOL(rte_cryptodev_info_get, _v20, 20.0);
>

Nit: remove empty line.


> -void __vsym
> -rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
> -{
> -       struct rte_cryptodev *dev;
> -
> -       if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> -               CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> -               return;
> -       }
> -
> -       dev = &rte_crypto_devices[dev_id];
> -
> -       memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
> -
> -       RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
> -       (*dev->dev_ops->dev_infos_get)(dev, dev_info);
> -
> -       dev_info->driver_name = dev->device->driver->name;
> -       dev_info->device = dev->device;
> -}
> -MAP_STATIC_SYMBOL(void rte_cryptodev_info_get(uint8_t dev_id,
> -       struct rte_cryptodev_info *dev_info), rte_cryptodev_info_get_v21);
> -BIND_DEFAULT_SYMBOL(rte_cryptodev_info_get, _v21, 21);
>
>  int
>  rte_cryptodev_callback_register(uint8_t dev_id,
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 7b3ebc20f..26abd0c52 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -219,14 +219,6 @@ struct rte_cryptodev_asym_capability_idx {
>   *   - Return NULL if the capability not exist.
>   */
>  const struct rte_cryptodev_symmetric_capability *
> -rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
> -               const struct rte_cryptodev_sym_capability_idx *idx);
> -
> -const struct rte_cryptodev_symmetric_capability *
> -rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
> -               const struct rte_cryptodev_sym_capability_idx *idx);
> -
> -const struct rte_cryptodev_symmetric_capability *
>  rte_cryptodev_sym_capability_get(uint8_t dev_id,
>                 const struct rte_cryptodev_sym_capability_idx *idx);
>
> @@ -789,33 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
>   * the last valid element has it's op field set to
>   * RTE_CRYPTO_OP_TYPE_UNDEFINED.
>   */
> -
> -void
> +extern void

Nit: no need for extern.


>  rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
>
> -/* An extra element RTE_CRYPTO_AEAD_CHACHA20_POLY1305 is added
> - * to enum rte_crypto_aead_algorithm, also changing the value of
> - *  RTE_CRYPTO_AEAD_LIST_END. To maintain ABI compatibility with applications
> - * which linked against earlier versions, preventing them, for example, from
> - * picking up the new value and using it to index into an array sized too small
> - * for it, it is necessary to have two versions of rte_cryptodev_info_get()
> - * The latest version just returns directly the capabilities retrieved from
> - * the device. The compatible version inspects the capabilities retrieved
> - * from the device, but only returns them directly if the new value
> - * is not included. If the new value is included, it allocates space
> - * for a copy of the device capabilities, trims the new value from this
> - * and returns this copy. It only needs to do this once per device.
> - * For the corner case of a corner case when the alloc may fail,
> - * an empty capability list is returned, as there is no mechanism to return
> - * an error and adding such a mechanism would itself be an ABI breakage.
> - * The compatible version can be removed after the next major ABI release.
> - */
> -
> -void
> -rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
> -
> -void
> -rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
>

Nit: remove empty line.


>  /**
>   * Register a callback function for specific device id.
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 02f6dcf72..7727286ac 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -58,12 +58,6 @@ DPDK_21 {
>         local: *;
>  };
>
> -DPDK_20.0 {
> -       global:
> -       rte_cryptodev_info_get;
> -       rte_cryptodev_sym_capability_get;
> -};
> -
>  EXPERIMENTAL {
>         global:
>
> --
> 2.25.1
>

Thanks for working on this.
Note to others watching the ABI: with this, it should be the last patch
touching the DPDK_20 ABI.
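
For application writers nothing changes at the call sites: with the versioned
copies gone, every caller binds to the single remaining implementation. A
minimal sketch of such a call site, using only the unchanged public
prototypes (an illustration, not part of the patch):

#include <rte_cryptodev.h>

/* Query device info and look up an AES-GCM capability through the
 * now-unversioned rte_cryptodev_info_get() and
 * rte_cryptodev_sym_capability_get() entry points.
 */
static const struct rte_cryptodev_symmetric_capability *
get_aes_gcm_capability(uint8_t dev_id)
{
	struct rte_cryptodev_info info;
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_AES_GCM,
	};

	rte_cryptodev_info_get(dev_id, &info);

	return rte_cryptodev_sym_capability_get(dev_id, &idx);
}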


-- 
David Marchand


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 2/2] ethdev: change data type in TC rxq and TC txq
  2020-10-05 12:26  0%       ` Ferruh Yigit
@ 2020-10-06 12:04  0%         ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-06 12:04 UTC (permalink / raw)
  To: Thomas Monjalon, Min Hu (Connor)
  Cc: techboard, stephen, bruce.richardson, jerinj, dev

On 10/5/2020 1:26 PM, Ferruh Yigit wrote:
> On 9/28/2020 10:21 AM, Thomas Monjalon wrote:
>> 28/09/2020 11:04, Ferruh Yigit:
>>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> Currently, base and nb_queue in the tc_rxq and tc_txq information
>>>> of queue and TC mapping on both TX and RX paths are uint8_t.
>>>> However, these data will be truncated when queue number under a TC
>>>> is greater than 256. So it is necessary for base and nb_queue to
>>>> change from uint8_t to uint16_t.
>> [...]
>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>>    struct rte_eth_dcb_tc_queue_mapping {
>>>>        /** rx queues assigned to tc per Pool */
>>>>        struct {
>>>> -        uint8_t base;
>>>> -        uint8_t nb_queue;
>>>> +        uint16_t base;
>>>> +        uint16_t nb_queue;
>>>>        } tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
>>>>        /** rx queues assigned to tc per Pool */
>>>>        struct {
>>>> -        uint8_t base;
>>>> -        uint8_t nb_queue;
>>>> +        uint16_t base;
>>>> +        uint16_t nb_queue;
>>>>        } tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
>>>>    };
>>>>
>>>
>>> cc'ed tech-board,
>>>
>>> The patch breaks the ethdev ABI without a deprecation notice from previous
>>> release(s).
>>>
>>> It is increasing the storage size of the fields to support more than 255 queues.
>>
>> Yes queues are in 16-bit range.
>>
>>> Since the ethdev library already heavily breaks the ABI this release, I am for
>>> getting this patch, instead of waiting for one more year for the update.
>>>
>>> Can you please review the patch, is there any objection to proceed with it?
>>
>> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>>
>>
> 
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> I will continue with this patch (not patchset) if there is no objection.
 >

Applied to dpdk-next-net/main, thanks.

Only this patch from the patchset has been merged; discussion is ongoing on the
1/2 patch, and since the issues are separate, it can continue on its own.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error packets
  @ 2020-10-06 12:01  3%         ` Jerin Jacob
  2020-10-06 13:10  0%           ` Nipun Gupta
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-06 12:01 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Stephen Hemminger, dpdk-dev, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko, Hemant Agrawal, Sachin Saxena, Rohit Raj

On Tue, Oct 6, 2020 at 4:07 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Monday, October 5, 2020 9:40 PM
> > To: Stephen Hemminger <stephen@networkplumber.org>
> > Cc: Nipun Gupta <nipun.gupta@nxp.com>; dpdk-dev <dev@dpdk.org>; Thomas
> > Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > Andrew Rybchenko <arybchenko@solarflare.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>; Rohit
> > Raj <rohit.raj@nxp.com>
> > Subject: Re: [dpdk-dev] [PATCH 1/3 v2] ethdev: add rx offload to drop error
> > packets
> >
> > On Mon, Oct 5, 2020 at 9:05 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > >
> > > On Mon,  5 Oct 2020 12:45:04 +0530
> > > nipun.gupta@nxp.com wrote:
> > >
> > > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > > >
> > > > This change adds a RX offload capability, which once enabled,
> > > > hardware will drop the packets in case there of any error in
> > > > the packet such as L3 checksum error or L4 checksum.
> >
> > IMO, Providing additional support up to the level to choose the errors
> > to drops give more control to the application. Meaning,
> > L1 errors such as FCS error
> > L2 errors ..
> > L3 errors such checksum
> > i.e ethdev spec need to have  error level supported by PMD and the
> > application can set the layers interested to drop.
>
> Agree, but 'DEV_RX_OFFLOAD_ERR_PKT_DROP' shall also be there to drop all the
> error packets? Maybe we can rename it to DEV_RX_OFFLOAD_ALL_ERR_PKT_DROP.

IMHO, if we introduce such a shortcut (a single flag covering all error
drops), then we cannot change the scheme later without an API/ABI break.

>
> Currently we have not planned to add separate knobs for separate error in
> the driver, maybe we can define them separately, or we need have them in
> this series itself?

I think the ethdev API can expose a capability describing which error
levels are supported; in your driver's case, you can express the same.
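
To make the idea concrete, a rough sketch; the flag names and bit positions
below are hypothetical placeholders, not a proposal for actual values:

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical per-layer error-drop offload flags -- illustration only. */
#define DEV_RX_OFFLOAD_L1_ERR_PKT_DROP (UINT64_C(1) << 40)
#define DEV_RX_OFFLOAD_L3_ERR_PKT_DROP (UINT64_C(1) << 41)
#define DEV_RX_OFFLOAD_L4_ERR_PKT_DROP (UINT64_C(1) << 42)

/* The PMD advertises the levels it supports in rx_offload_capa and the
 * application enables only that subset.
 */
static uint64_t
select_err_drop_offloads(uint16_t port_id, uint64_t wanted)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return wanted & dev_info.rx_offload_capa;
}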


>
> >
> > > >
> > > > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> > > > Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> > > > ---
> > > > These patches are based over series:
> > > >
> > > > https://patchwork.dpdk.org/patch/78630/
> > > >
> > > > Changes in v2:
> > > >  - Add support in DPAA1 driver (patch 2/3)
> > > >  - Add support and config parameter in testpmd (patch 3/3)
> > > >
> > > >  lib/librte_ethdev/rte_ethdev.h | 1 +
> > > >  1 file changed, 1 insertion(+)
> > >
> > > Maybe this should be an rte_flow match/action which would then make it
> > > more flexible?
> >
> > I think, it is not based on any Patten matching. So IMO, it should be best if it
> > is part of RX offload.
> >
> > >
> > > There is not much of a performance gain for this in real life and
> > > if only one driver supports it then I am not convinced this is needed.
> >
> > Marvell HW has this feature.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI
  2020-10-06  7:07  7% [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Olivier Matz
                   ` (2 preceding siblings ...)
  2020-10-06  9:52  4% ` David Marchand
@ 2020-10-06 11:57  4% ` David Marchand
  3 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-06 11:57 UTC (permalink / raw)
  To: Olivier Matz
  Cc: dev, Andrew Rybchenko, Ray Kinsella, Neil Horman, Bruce Richardson

On Tue, Oct 6, 2020 at 9:08 AM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Remove the deprecated v20 ABI of rte_mempool_populate_iova() and
> rte_mempool_populate_virt().
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

Series applied, thanks Olivier.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 3/3] crypto/aesni_mb: support Chacha20-Poly1305
  @ 2020-10-06 10:59  4% ` Pablo de Lara
  0 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2020-10-06 10:59 UTC (permalink / raw)
  To: declan.doherty; +Cc: dev, Pablo de Lara

Add support for Chacha20-Poly1305 AEAD algorithm.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 doc/guides/cryptodevs/aesni_mb.rst            |  1 +
 doc/guides/cryptodevs/features/aesni_mb.ini   | 10 +--
 doc/guides/rel_notes/release_20_11.rst        |  3 +
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c    | 63 ++++++++++++++++---
 .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c    | 32 ++++++++++
 5 files changed, 97 insertions(+), 12 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 15388d20a..cf7ad5d57 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -54,6 +54,7 @@ AEAD algorithms:
 
 * RTE_CRYPTO_AEAD_AES_CCM
 * RTE_CRYPTO_AEAD_AES_GCM
+* RTE_CRYPTO_AEAD_CHACHA20_POLY1305
 
 Protocol offloads:
 
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 38d255aff..2e8305709 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -54,11 +54,11 @@ AES GMAC     = Y
 ; Supported AEAD algorithms of the 'aesni_mb' crypto driver.
 ;
 [AEAD]
-AES CCM (128) = Y
-AES GCM (128) = Y
-AES GCM (192) = Y
-AES GCM (256) = Y
-
+AES CCM (128)     = Y
+AES GCM (128)     = Y
+AES GCM (192)     = Y
+AES GCM (256)     = Y
+CHACHA20-POLY1305 = Y
 ;
 ; Supported Asymmetric algorithms of the 'aesni_mb' crypto driver.
 ;
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6d8c24413..f606c9a74 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -210,6 +210,9 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* **Updated the AESNI MB crypto PMD.**
+
+  * Added support for Chacha20-Poly1305.
 
 ABI Changes
 -----------
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index fa364530e..7b4d5f148 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -125,6 +125,18 @@ aesni_mb_get_chain_order(const struct rte_crypto_sym_xform *xform)
 	return AESNI_MB_OP_NOT_SUPPORTED;
 }
 
+static inline int
+is_aead_algo(JOB_HASH_ALG hash_alg, JOB_CIPHER_MODE cipher_mode)
+{
+#if IMB_VERSION(0, 54, 3) <= IMB_VERSION_NUM
+	return (hash_alg == IMB_AUTH_CHACHA20_POLY1305 || hash_alg == AES_CCM ||
+		(hash_alg == AES_GMAC && cipher_mode == GCM));
+#else
+	return ((hash_alg == AES_GMAC && cipher_mode == GCM) ||
+		hash_alg == AES_CCM);
+#endif
+}
+
 /** Set session authentication parameters */
 static int
 aesni_mb_set_session_auth_parameters(const MB_MGR *mb_mgr,
@@ -624,6 +636,24 @@ aesni_mb_set_session_aead_parameters(const MB_MGR *mb_mgr,
 		}
 		break;
 
+#if IMB_VERSION(0, 54, 3) <= IMB_VERSION_NUM
+	case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+		sess->cipher.mode = IMB_CIPHER_CHACHA20_POLY1305;
+		sess->auth.algo = IMB_AUTH_CHACHA20_POLY1305;
+
+		if (xform->aead.key.length != 32) {
+			AESNI_MB_LOG(ERR, "Invalid key length");
+			return -EINVAL;
+		}
+		sess->cipher.key_length_in_bytes = 32;
+		memcpy(sess->cipher.expanded_aes_keys.encode,
+			xform->aead.key.data, 32);
+		if (sess->auth.req_digest_len != 16) {
+			AESNI_MB_LOG(ERR, "Invalid digest size\n");
+			return -EINVAL;
+		}
+		break;
+#endif
 	default:
 		AESNI_MB_LOG(ERR, "Unsupported aead mode parameter");
 		return -ENOTSUP;
@@ -1122,6 +1152,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
 	/* Set authentication parameters */
 	job->hash_alg = session->auth.algo;
 
+	const int aead = is_aead_algo(job->hash_alg, job->cipher_mode);
+
 	switch (job->hash_alg) {
 	case AES_XCBC:
 		job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1168,6 +1200,14 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
 		job->aes_dec_key_expanded = &session->cipher.gcm_key;
 		break;
 
+#if IMB_VERSION(0, 54, 3) <= IMB_VERSION_NUM
+	case IMB_AUTH_CHACHA20_POLY1305:
+		job->u.CHACHA20_POLY1305.aad = op->sym->aead.aad.data;
+		job->u.CHACHA20_POLY1305.aad_len_in_bytes = session->aead.aad_len;
+		job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+		job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.encode;
+		break;
+#endif
 	default:
 		job->u.HMAC._hashed_auth_key_xor_ipad = session->auth.pads.inner;
 		job->u.HMAC._hashed_auth_key_xor_opad = session->auth.pads.outer;
@@ -1199,8 +1239,7 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
 		oop = 1;
 	}
 
-	if (job->hash_alg == AES_CCM || (job->hash_alg == AES_GMAC &&
-			session->cipher.mode == GCM))
+	if (aead)
 		m_offset = op->sym->aead.data.offset;
 	else
 		m_offset = op->sym->cipher.data.offset;
@@ -1211,8 +1250,7 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
 		job->auth_tag_output = qp->temp_digests[*digest_idx];
 		*digest_idx = (*digest_idx + 1) % MAX_JOBS;
 	} else {
-		if (job->hash_alg == AES_CCM || (job->hash_alg == AES_GMAC &&
-				session->cipher.mode == GCM))
+		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
 		else
 			job->auth_tag_output = op->sym->auth.digest.data;
@@ -1272,6 +1310,19 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
 				session->iv.offset);
 		break;
 
+#if IMB_VERSION(0, 54, 3) <= IMB_VERSION_NUM
+	case IMB_AUTH_CHACHA20_POLY1305:
+		job->cipher_start_src_offset_in_bytes = op->sym->aead.data.offset;
+		job->hash_start_src_offset_in_bytes = op->sym->aead.data.offset;
+		job->msg_len_to_cipher_in_bytes =
+				op->sym->aead.data.length;
+		job->msg_len_to_hash_in_bytes =
+					op->sym->aead.data.length;
+
+		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+				session->iv.offset);
+		break;
+#endif
 	default:
 		job->cipher_start_src_offset_in_bytes =
 				op->sym->cipher.data.offset;
@@ -1462,9 +1513,7 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
 				break;
 
 			if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
-				if (job->hash_alg == AES_CCM ||
-					(job->hash_alg == AES_GMAC &&
-						sess->cipher.mode == GCM))
+				if (is_aead_algo(job->hash_alg, sess->cipher.mode))
 					verify_digest(job,
 						op->sym->aead.digest.data,
 						sess->auth.req_digest_len,
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index 3e4282954..3089b0ca4 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -497,6 +497,38 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
 			}, }
 		}, }
 	},
+#if IMB_VERSION(0, 54, 3) <= IMB_VERSION_NUM
+	{	/* CHACHA20-POLY1305 */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
+				.block_size = 64,			\
+				.key_size = {				\
+					.min = 32,			\
+					.max = 32,			\
+					.increment = 0			\
+				},					\
+				.digest_size = {			\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				},					\
+				.aad_size = {				\
+					.min = 0,			\
+					.max = 240,			\
+					.increment = 1			\
+				},					\
+				.iv_size = {				\
+					.min = 12,			\
+					.max = 12,			\
+					.increment = 0			\
+				},					\
+			}, }						\
+		}, }							\
+	},
+#endif
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.25.1
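
For reference, a hedged sketch (not part of the patch) of how an application
could request the new AEAD transform; the 32-byte key, 12-byte IV and 16-byte
digest follow the capability entry added above:

#include <string.h>
#include <rte_crypto_sym.h>

/* Fill a Chacha20-Poly1305 AEAD transform. Sizes mirror the capability
 * table added in this patch; aad_len may be anything up to 240 bytes.
 */
static void
fill_chacha20_poly1305_xform(struct rte_crypto_sym_xform *xform,
		uint8_t *key, uint16_t iv_offset, uint16_t aad_len)
{
	memset(xform, 0, sizeof(*xform));
	xform->type = RTE_CRYPTO_SYM_XFORM_AEAD;
	xform->aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
	xform->aead.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305;
	xform->aead.key.data = key;
	xform->aead.key.length = 32;
	xform->aead.iv.offset = iv_offset;
	xform->aead.iv.length = 12;
	xform->aead.digest_length = 16;
	xform->aead.aad_length = aad_len;
}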


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH] crypto/aesni_mb: support AES-CCM-256
@ 2020-10-06 10:43  4% Pablo de Lara
  0 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2020-10-06 10:43 UTC (permalink / raw)
  To: declan.doherty; +Cc: dev, Pablo de Lara

This patch adds support for AES-CCM-256 when using AESNI-MB.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 doc/guides/cryptodevs/features/aesni_mb.ini    | 1 +
 doc/guides/rel_notes/release_20_11.rst         | 4 ++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 5 +++++
 3 files changed, 10 insertions(+)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 38d255aff..58afb203e 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -55,6 +55,7 @@ AES GMAC     = Y
 ;
 [AEAD]
 AES CCM (128) = Y
+AES CCM (256) = Y
 AES GCM (128) = Y
 AES GCM (192) = Y
 AES GCM (256) = Y
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6d8c24413..6a2d000d3 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -210,6 +210,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* **Updated the AESNI MB crypto PMD.**
+
+  * Updated the AESNI MB PMD with AES-256 CCM algorithm.
+
 
 ABI Changes
 -----------
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index 2362f0c3c..7759a9873 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -400,8 +400,13 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
 				.block_size = 16,
 				.key_size = {
 					.min = 16,
+#if IMB_VERSION(0, 54, 2) <= IMB_VERSION_NUM
+					.max = 32,
+					.increment = 16
+#else
 					.max = 16,
 					.increment = 0
+#endif
 				},
 				.digest_size = {
 					.min = 4,
-- 
2.25.1
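
For reference, a hedged sketch (not part of the patch) of how an application
might check at runtime that the device really advertises a 256-bit CCM key
before creating a session; the digest/AAD/IV sizes are illustrative values
within the advertised ranges:

#include <rte_cryptodev.h>

/* Return 1 if the device advertises AES-CCM with a 256-bit key. */
static int
aes_ccm_256_supported(uint8_t dev_id)
{
	const struct rte_cryptodev_symmetric_capability *cap;
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_AES_CCM,
	};

	cap = rte_cryptodev_sym_capability_get(dev_id, &idx);
	if (cap == NULL)
		return 0;

	/* key = 32, digest = 16, aad = 16, iv = 12 */
	return rte_cryptodev_sym_capability_check_aead(cap, 32, 16, 16, 12) == 0;
}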


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI
  2020-10-06  7:07  7% [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Olivier Matz
  2020-10-06  7:07  7% ` [dpdk-dev] [PATCH 2/2] mempool: remove experimental tags Olivier Matz
  2020-10-06  8:15  4% ` [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Bruce Richardson
@ 2020-10-06  9:52  4% ` David Marchand
  2020-10-06 11:57  4% ` David Marchand
  3 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-06  9:52 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Andrew Rybchenko, Ray Kinsella, Neil Horman

On Tue, Oct 6, 2020 at 9:08 AM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Remove the deprecated v20 ABI of rte_mempool_populate_iova() and
> rte_mempool_populate_virt().
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  lib/librte_mempool/meson.build             |  2 -
>  lib/librte_mempool/rte_mempool.c           | 79 ++--------------------
>  lib/librte_mempool/rte_mempool_version.map |  7 --
>  3 files changed, 5 insertions(+), 83 deletions(-)
>
> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> index 7dbe6b9bea..a6e861cbfc 100644
> --- a/lib/librte_mempool/meson.build
> +++ b/lib/librte_mempool/meson.build
> @@ -9,8 +9,6 @@ foreach flag: extra_flags
>         endif
>  endforeach
>
> -use_function_versioning = true
> -
>  sources = files('rte_mempool.c', 'rte_mempool_ops.c',
>                 'rte_mempool_ops_default.c', 'mempool_trace_points.c')
>  headers = files('rte_mempool.h', 'rte_mempool_trace.h',
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7774f0c8da..0e3a2a7635 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -30,7 +30,6 @@
>  #include <rte_string_fns.h>
>  #include <rte_spinlock.h>
>  #include <rte_tailq.h>
> -#include <rte_function_versioning.h>
>  #include <rte_eal_paging.h>
>
>
> @@ -305,17 +304,12 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
>         return 0;
>  }
>
> -__vsym int
> -rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
> -       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque);
> -
>  /* Add objects in the pool, using a physically contiguous memory
>   * zone. Return the number of objects added, or a negative value
>   * on error.
>   */
> -__vsym int
> -rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
> +int
> +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -375,35 +369,6 @@ rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
>         return ret;
>  }
>
> -BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);
> -MAP_STATIC_SYMBOL(
> -       int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> -                               rte_iova_t iova, size_t len,
> -                               rte_mempool_memchunk_free_cb_t *free_cb,
> -                               void *opaque),
> -       rte_mempool_populate_iova_v21);
> -
> -__vsym int
> -rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
> -       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque);
> -
> -__vsym int
> -rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
> -       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque)
> -{
> -       int ret;
> -
> -       ret = rte_mempool_populate_iova_v21(mp, vaddr, iova, len, free_cb,
> -                                       opaque);
> -       if (ret == 0)
> -               ret = -EINVAL;
> -
> -       return ret;
> -}
> -VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
> -
>  static rte_iova_t
>  get_iova(void *addr)
>  {
> @@ -417,16 +382,11 @@ get_iova(void *addr)
>         return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
>  }
>
> -__vsym int
> -rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
> -       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque);
> -
>  /* Populate the mempool with a virtual area. Return the number of
>   * objects added, or a negative value on error.
>   */
> -__vsym int
> -rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
> +int
> +rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>         size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -459,7 +419,7 @@ rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
>                                 break;
>                 }
>
> -               ret = rte_mempool_populate_iova_v21(mp, addr + off, iova,
> +               ret = rte_mempool_populate_iova(mp, addr + off, iova,
>                         phys_len, free_cb, opaque);
>                 if (ret == 0)
>                         continue;
> @@ -477,35 +437,6 @@ rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
>         rte_mempool_free_memchunks(mp);
>         return ret;
>  }
> -BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v21, 21);
> -MAP_STATIC_SYMBOL(
> -       int rte_mempool_populate_virt(struct rte_mempool *mp,
> -                               char *addr, size_t len, size_t pg_sz,
> -                               rte_mempool_memchunk_free_cb_t *free_cb,
> -                               void *opaque),
> -       rte_mempool_populate_virt_v21);
> -
> -__vsym int
> -rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
> -       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque);
> -
> -__vsym int
> -rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
> -       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> -       void *opaque)
> -{
> -       int ret;
> -
> -       ret = rte_mempool_populate_virt_v21(mp, addr, len, pg_sz,
> -                                               free_cb, opaque);
> -
> -       if (ret == 0)
> -               ret = -EINVAL;
> -
> -       return ret;
> -}
> -VERSION_SYMBOL(rte_mempool_populate_virt, _v20, 20.0);
>
>  /* Get the minimal page size used in a mempool before populating it. */
>  int
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 50e22ee020..83760ecfc9 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -31,13 +31,6 @@ DPDK_21 {
>         local: *;
>  };
>
> -DPDK_20.0 {
> -       global:
> -
> -       rte_mempool_populate_iova;
> -       rte_mempool_populate_virt;
> -};
> -
>  EXPERIMENTAL {
>         global:
>
> --
> 2.25.1
>

For the series,
Reviewed-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats
  2020-10-05 12:23  0%         ` Ferruh Yigit
@ 2020-10-06  8:33  0%           ` Olivier Matz
  2020-10-09 20:32  0%             ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2020-10-06  8:33 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Stephen Hemminger, Thomas Monjalon, Min Hu (Connor),
	techboard, bruce.richardson, jerinj, Ray Kinsella, dev

Hi,

On Mon, Oct 05, 2020 at 01:23:08PM +0100, Ferruh Yigit wrote:
> On 9/28/2020 4:43 PM, Stephen Hemminger wrote:
> > On Mon, 28 Sep 2020 17:24:26 +0200
> > Thomas Monjalon <thomas@monjalon.net> wrote:
> > 
> > > 28/09/2020 15:53, Ferruh Yigit:
> > > > On 9/28/2020 10:16 AM, Thomas Monjalon wrote:
> > > > > 28/09/2020 10:59, Ferruh Yigit:
> > > > > > On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
> > > > > > > From: Huisong Li <lihuisong@huawei.com>
> > > > > > > 
> > > > > > > Currently, only statistics of rx/tx queues with queue_id less than
> > > > > > > RTE_ETHDEV_QUEUE_STAT_CNTRS can be displayed. If there is a certain
> > > > > > > application scenario that it needs to use 256 or more than 256 queues
> > > > > > > and display all statistics of rx/tx queue. At this moment, we have to
> > > > > > > change the macro to be equaled to the queue number.
> > > > > > > 
> > > > > > > However, modifying the macro to be greater than 256 will trigger
> > > > > > > many errors and warnings from test-pmd, PMD drivers and librte_ethdev
> > > > > > > during compiling dpdk project. But it is possible and permitted that
> > > > > > > rx/tx queue number is greater than 256 and all statistics of rx/tx
> > > > > > > queue need to be displayed. In addition, the data type of rx/tx queue
> > > > > > > number in rte_eth_dev_configure API is 'uint16_t'. So It is unreasonable
> > > > > > > to use the 'uint8_t' type for variables that control which per-queue
> > > > > > > statistics can be displayed.
> > > > > 
> > > > > The explanation is too much complex and misleading.
> > > > > You mean you cannot increase RTE_ETHDEV_QUEUE_STAT_CNTRS
> > > > > above 256 because it is an 8-bit type?
> > > > > 
> > > > > [...]
> > > > > > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > > > > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > > > > >     int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
> > > > > > > -		uint16_t tx_queue_id, uint8_t stat_idx);
> > > > > > > +		uint16_t tx_queue_id, uint16_t stat_idx);
> > > > > [...]
> > > > > > >     int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
> > > > > > >     					   uint16_t rx_queue_id,
> > > > > > > -					   uint8_t stat_idx);
> > > > > > > +					   uint16_t stat_idx);
> > > > > [...]
> > > > > > cc'ed tech-board,
> > > > > > 
> > > > > > The patch breaks the ethdev ABI without a deprecation notice from previous
> > > > > > release(s).
> > > > > > 
> > > > > > It is mainly a fix to the port_id storage type, which we have updated from
> > > > > > uint8_t to uint16_t in past but some seems remained for
> > > > > > 'rte_eth_dev_set_tx_queue_stats_mapping()' &
> > > > > > 'rte_eth_dev_set_rx_queue_stats_mapping()' APIs.
> > > > > 
> > > > > No, it is not related to the port id, but the number of limited stats.
> > > > 
> > > > Right, it is not related to the port id, it is fixing the storage type for index
> > > > used to map the queue stats.
> > > > > > Since the ethdev library already heavily breaks the ABI this release, I am for
> > > > > > getting this fix, instead of waiting the fix for one more year.
> > > > > 
> > > > > If stats can be managed for more than 256 queues, I think it means
> > > > > it is not limited. In this case, we probably don't need the API
> > > > > *_queue_stats_mapping which was invented for a limitation of ixgbe.
> > > > > 
> > > > > The problem is probably somewhere else (in testpmd),
> > > > > that's why I am against this patch.
> > > > 
> > > > This patch is not to fix queue stats mapping, I agree there are problems related
> > > > to it, already shared as comment to this set.
> > > > 
> > > > But this patch is to fix the build errors when 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
> > > > needs to set more than 255. Where the build errors seems around the
> > > > stats_mapping APIs.
> > > 
> > > It is not said this API is supposed to manage more than 256 queues mapping.
> > > In general we should not need this API.
> > > I think it is solving the wrong problem.
> > 
> > 
> > The original API is a band aid for the limited number of statistics counters
> > in the Intel IXGBE hardware. It crept into to the DPDK as an API. I would rather
> > have per-queue statistics and make ixgbe say "not supported"
> > 
> 
> The current issue is not directly related to '*_queue_stats_mapping' APIs.
> 
> Problem is not able to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255.
> User may need to set the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, since it is
> used to define size of the stats counter.
> "uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];"
> 
> When 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, it gives multiple build errors,
> the one in the ethdev is like [1].
> 
> This can be fixed two ways,
> a) increase the size of 'stat_idx' storage type to u16 in the
> '*_queue_stats_mapping' APIs, this is what this patch does.
> b) Fix with a casting in the comparison, without changing the APIs.
> 
> I think both are OK, but is (b) more preferable?

I think approach (a) is OK, given that the value of RTE_ETHDEV_QUEUE_STAT_CNTRS
itself is not modified.

On the substance, I agree with Thomas that the queue_stats_mapping API
should be replaced by xstats.


> 
> 
> [1]
> ../lib/librte_ethdev/rte_ethdev.c: In function ‘set_queue_stats_mapping’:
> ../lib/librte_ethdev/rte_ethdev.c:2943:15: warning: comparison is always
> false due to limited range of data type [-Wtype-limits]
>  2943 |  if (stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)
>       |               ^~
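
For readers following along, a minimal sketch of what option (b) above could
look like: keep the uint8_t parameter and widen it only in the comparison.
This is an illustration with a stand-in constant, not the submitted change,
and whether the cast silences -Wtype-limits on every compiler would need
checking:

#include <stdint.h>
#include <errno.h>

/* Stand-in for a build where the stats array is enlarged beyond 255. */
#define QUEUE_STAT_CNTRS 512

static int
check_stat_idx(uint8_t stat_idx)
{
	/* Compare in a wider type; with a uint8_t index the check can never
	 * fire in this configuration, which is what the warning pointed out.
	 */
	if ((uint32_t)stat_idx >= QUEUE_STAT_CNTRS)
		return -EINVAL;
	return 0;
}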

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
  2020-10-05 20:27  6% ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
@ 2020-10-06  8:26  4%   ` Van Haaren, Harry
  2020-10-12 19:09  4%     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2020-10-06  8:26 UTC (permalink / raw)
  To: McDaniel, Timothy, Jerin Jacob, Kovacevic, Marko, Ori Kam,
	Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Carrillo, Erik G, Eads, Gage, hemant.agrawal

> -----Original Message-----
> From: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Sent: Monday, October 5, 2020 9:28 PM
> To: Jerin Jacob <jerinj@marvell.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>; Akhil
> Goyal <akhil.goyal@nxp.com>; Kantecki, Tomasz <tomasz.kantecki@intel.com>;
> Sunil Kumar Kori <skori@marvell.com>; Pavan Nikhilesh
> <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org; Carrillo, Erik G <erik.g.carrillo@intel.com>; Eads, Gage
> <gage.eads@intel.com>; hemant.agrawal@nxp.com
> Subject: [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
> 
> Several data structures and constants changed, or were added,
> in the previous patch.  This commit updates the dependent
> apps and examples to use the new ABI.
> 
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---

With this patch applied, the compilation works fine; however, it fails at runtime.
Note that there is a dependency on the following fix, which Timothy upstreamed:
http://patches.dpdk.org/patch/79713/

The above-linked patch increases the CTF trace size and fixes the following error:
./dpdk-test
EAL: __rte_trace_point_emit_field():442 CTF field is too long
EAL: __rte_trace_point_register():468 missing rte_trace_emit_header() in register fn


>  app/test-eventdev/evt_common.h                     | 11 ++++++++
>  app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++------
>  app/test-eventdev/test_order_common.c              |  1 +
>  app/test-eventdev/test_order_queue.c               | 29 ++++++++++++++++------
>  app/test/test_eventdev.c                           |  4 +--
>  .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
>  examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
>  examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
>  examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
>  11 files changed, 80 insertions(+), 26 deletions(-)
> 
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f9d7378..a1da1cf 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
>  			true : false;
>  }
> 
> +static inline bool
> +evt_has_flow_id(uint8_t dev_id)
> +{
> +	struct rte_event_dev_info dev_info;
> +
> +	rte_event_dev_info_get(dev_id, &dev_info);
> +	return (dev_info.event_dev_cap &
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
> +			true : false;
> +}
> +
>  static inline int
>  evt_service_setup(uint32_t service_id)
>  {
> @@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t
> nb_queues,
>  			.dequeue_timeout_ns = opt->deq_tmo_nsec,
>  			.nb_event_queues = nb_queues,
>  			.nb_event_ports = nb_ports,
> +			.nb_single_link_event_port_queues = 0,
>  			.nb_events_limit  = info.max_num_events,
>  			.nb_event_queue_flows = opt->nb_flows,
>  			.nb_event_port_dequeue_depth =
> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-
> eventdev/test_order_atq.c
> index 3366cfc..cfcb1dc 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
>  }
> 
>  static int
> -order_atq_worker(void *arg)
> +order_atq_worker(void *arg, const bool flow_id_cap)
>  {
>  	ORDER_WORKER_INIT;
>  	struct rte_event ev;
> @@ -34,6 +34,9 @@ order_atq_worker(void *arg)
>  			continue;
>  		}
> 
> +		if (!flow_id_cap)
> +			ev.flow_id = ev.mbuf->udata64;
> +
>  		if (ev.sub_event_type == 0) { /* stage 0 from producer */
>  			order_atq_process_stage_0(&ev);
>  			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_atq_worker(void *arg)
>  }
> 
>  static int
> -order_atq_worker_burst(void *arg)
> +order_atq_worker_burst(void *arg, const bool flow_id_cap)
>  {
>  	ORDER_WORKER_INIT;
>  	struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
>  		}
> 
>  		for (i = 0; i < nb_rx; i++) {
> +			if (!flow_id_cap)
> +				ev[i].flow_id = ev[i].mbuf->udata64;
> +
>  			if (ev[i].sub_event_type == 0) { /*stage 0 */
>  				order_atq_process_stage_0(&ev[i]);
>  			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
> @@ -95,11 +101,19 @@ worker_wrapper(void *arg)
>  {
>  	struct worker_data *w  = arg;
>  	const bool burst = evt_has_burst_mode(w->dev_id);
> -
> -	if (burst)
> -		return order_atq_worker_burst(arg);
> -	else
> -		return order_atq_worker(arg);
> +	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> +	if (burst) {
> +		if (flow_id_cap)
> +			return order_atq_worker_burst(arg, true);
> +		else
> +			return order_atq_worker_burst(arg, false);
> +	} else {
> +		if (flow_id_cap)
> +			return order_atq_worker(arg, true);
> +		else
> +			return order_atq_worker(arg, false);
> +	}
>  }
> 
>  static int
> diff --git a/app/test-eventdev/test_order_common.c b/app/test-
> eventdev/test_order_common.c
> index 4190f9a..7942390 100644
> --- a/app/test-eventdev/test_order_common.c
> +++ b/app/test-eventdev/test_order_common.c
> @@ -49,6 +49,7 @@ order_producer(void *arg)
>  		const uint32_t flow = (uintptr_t)m % nb_flows;
>  		/* Maintain seq number per flow */
>  		m->seqn = producer_flow_seq[flow]++;
> +		m->udata64 = flow;
> 
>  		ev.flow_id = flow;
>  		ev.mbuf = m;
> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-
> eventdev/test_order_queue.c
> index 495efd9..1511c00 100644
> --- a/app/test-eventdev/test_order_queue.c
> +++ b/app/test-eventdev/test_order_queue.c
> @@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
>  }
> 
>  static int
> -order_queue_worker(void *arg)
> +order_queue_worker(void *arg, const bool flow_id_cap)
>  {
>  	ORDER_WORKER_INIT;
>  	struct rte_event ev;
> @@ -34,6 +34,9 @@ order_queue_worker(void *arg)
>  			continue;
>  		}
> 
> +		if (!flow_id_cap)
> +			ev.flow_id = ev.mbuf->udata64;
> +
>  		if (ev.queue_id == 0) { /* from ordered queue */
>  			order_queue_process_stage_0(&ev);
>  			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_queue_worker(void *arg)
>  }
> 
>  static int
> -order_queue_worker_burst(void *arg)
> +order_queue_worker_burst(void *arg, const bool flow_id_cap)
>  {
>  	ORDER_WORKER_INIT;
>  	struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
>  		}
> 
>  		for (i = 0; i < nb_rx; i++) {
> +
> +			if (!flow_id_cap)
> +				ev[i].flow_id = ev[i].mbuf->udata64;
> +
>  			if (ev[i].queue_id == 0) { /* from ordered queue */
>  				order_queue_process_stage_0(&ev[i]);
>  			} else if (ev[i].queue_id == 1) {/* from atomic queue */
> @@ -95,11 +102,19 @@ worker_wrapper(void *arg)
>  {
>  	struct worker_data *w  = arg;
>  	const bool burst = evt_has_burst_mode(w->dev_id);
> -
> -	if (burst)
> -		return order_queue_worker_burst(arg);
> -	else
> -		return order_queue_worker(arg);
> +	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> +	if (burst) {
> +		if (flow_id_cap)
> +			return order_queue_worker_burst(arg, true);
> +		else
> +			return order_queue_worker_burst(arg, false);
> +	} else {
> +		if (flow_id_cap)
> +			return order_queue_worker(arg, true);
> +		else
> +			return order_queue_worker(arg, false);
> +	}
>  }
> 
>  static int
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 43ccb1c..62019c1 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>  	if (!(info.event_dev_cap &
>  	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>  		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
> -		pconf.disable_implicit_release = 1;
> +		pconf.event_port_cfg =
> RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>  		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>  		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
> -		pconf.disable_implicit_release = 0;
> +		pconf.event_port_cfg = 0;
>  	}
> 
>  	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c
> b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 42ff4ee..f70ab0c 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>  	struct rte_event_dev_config config = {
>  			.nb_event_queues = nb_queues,
>  			.nb_event_ports = nb_ports,
> +			.nb_single_link_event_port_queues = 1,
>  			.nb_events_limit  = 4096,
>  			.nb_event_queue_flows = 1024,
>  			.nb_event_port_dequeue_depth = 128,
> @@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>  			.schedule_type = cdata.queue_type,
>  			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>  			.nb_atomic_flows = 1024,
> -		.nb_atomic_order_sequences = 1024,
> +			.nb_atomic_order_sequences = 1024,
>  	};
>  	struct rte_event_queue_conf tx_q_conf = {
>  			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> @@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>  	disable_implicit_release = (dev_info.event_dev_cap &
>  			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
> 
> -	wkr_p_conf.disable_implicit_release = disable_implicit_release;
> +	wkr_p_conf.event_port_cfg = disable_implicit_release ?
> +		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
> 
>  	if (dev_info.max_num_events < config.nb_events_limit)
>  		config.nb_events_limit = dev_info.max_num_events;
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c
> b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index 55bb2f7..ca6cd20 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data
> *worker_data)
>  	struct rte_event_dev_config config = {
>  			.nb_event_queues = nb_queues,
>  			.nb_event_ports = nb_ports,
> +			.nb_single_link_event_port_queues = 0,
>  			.nb_events_limit  = 4096,
>  			.nb_event_queue_flows = 1024,
>  			.nb_event_port_dequeue_depth = 128,
> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-
> event/l2fwd_event_generic.c
> index 2dc95e5..9a3167c 100644
> --- a/examples/l2fwd-event/l2fwd_event_generic.c
> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> @@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources
> *rsrc)
>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>  		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
> 
> -	event_p_conf.disable_implicit_release =
> -		evt_rsrc->disable_implicit_release;
> +	event_p_conf.event_port_cfg = 0;
> +	if (evt_rsrc->disable_implicit_release)
> +		event_p_conf.event_port_cfg |=
> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
>  	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
> 
>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-
> event/l2fwd_event_internal_port.c
> index 63d57b4..203a14c 100644
> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> @@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct
> l2fwd_resources *rsrc)
>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>  		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
> 
> -	event_p_conf.disable_implicit_release =
> -		evt_rsrc->disable_implicit_release;
> +	event_p_conf.event_port_cfg = 0;
> +	if (evt_rsrc->disable_implicit_release)
> +		event_p_conf.event_port_cfg |=
> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> 
>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>  								event_p_id++) {
> diff --git a/examples/l3fwd/l3fwd_event_generic.c
> b/examples/l3fwd/l3fwd_event_generic.c
> index f8c9843..c80573f 100644
> --- a/examples/l3fwd/l3fwd_event_generic.c
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>  		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
> 
> -	event_p_conf.disable_implicit_release =
> -		evt_rsrc->disable_implicit_release;
> +	event_p_conf.event_port_cfg = 0;
> +	if (evt_rsrc->disable_implicit_release)
> +		event_p_conf.event_port_cfg |=
> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
>  	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
> 
>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c
> b/examples/l3fwd/l3fwd_event_internal_port.c
> index 03ac581..9916a7f 100644
> --- a/examples/l3fwd/l3fwd_event_internal_port.c
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
>  	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>  		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
> 
> -	event_p_conf.disable_implicit_release =
> -		evt_rsrc->disable_implicit_release;
> +	event_p_conf.event_port_cfg = 0;
> +	if (evt_rsrc->disable_implicit_release)
> +		event_p_conf.event_port_cfg |=
> +			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> 
>  	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>  								event_p_id++) {
> --
> 2.6.4


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI
  2020-10-06  7:07  7% [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Olivier Matz
  2020-10-06  7:07  7% ` [dpdk-dev] [PATCH 2/2] mempool: remove experimental tags Olivier Matz
@ 2020-10-06  8:15  4% ` Bruce Richardson
  2020-10-06  9:52  4% ` David Marchand
  2020-10-06 11:57  4% ` David Marchand
  3 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-06  8:15 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Andrew Rybchenko, Ray Kinsella, Neil Horman

On Tue, Oct 06, 2020 at 09:07:49AM +0200, Olivier Matz wrote:
> Remove the deprecated v20 ABI of rte_mempool_populate_iova() and
> rte_mempool_populate_virt().
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  lib/librte_mempool/meson.build             |  2 -
>  lib/librte_mempool/rte_mempool.c           | 79 ++--------------------
>  lib/librte_mempool/rte_mempool_version.map |  7 --
>  3 files changed, 5 insertions(+), 83 deletions(-)
> 
Thanks for the cleanup.

Series-acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-05 20:27  2% ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-06  8:15  0%   ` Van Haaren, Harry
  2020-10-12 19:06  0%   ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
  1 sibling, 0 replies; 200+ results
From: Van Haaren, Harry @ 2020-10-06  8:15 UTC (permalink / raw)
  To: McDaniel, Timothy, Hemant Agrawal, Nipun Gupta,
	Mattias Rönnblom, Jerin Jacob, Pavan Nikhilesh, Ma,
	 Liang J, Mccarthy, Peter, Rao, Nikhil, Ray Kinsella,
	Neil Horman
  Cc: dev, Carrillo, Erik G, Eads, Gage

> -----Original Message-----
> From: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Sent: Monday, October 5, 2020 9:28 PM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; Nipun Gupta
> <nipun.gupta@nxp.com>; Mattias Rönnblom <mattias.ronnblom@ericsson.com>;
> Jerin Jacob <jerinj@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>;
> Ma, Liang J <liang.j.ma@intel.com>; Mccarthy, Peter <peter.mccarthy@intel.com>;
> Van Haaren, Harry <harry.van.haaren@intel.com>; Rao, Nikhil
> <nikhil.rao@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil Horman
> <nhorman@tuxdriver.com>
> Cc: dev@dpdk.org; Carrillo, Erik G <erik.g.carrillo@intel.com>; Eads, Gage
> <gage.eads@intel.com>
> Subject: [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
> 
> This commit implements the eventdev ABI changes required by
> the DLB PMD.
> 
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---

I think patches 1/2 and 2/2 will have to be merged into a single patch.
The reason is that compilation fails after 1/2 because apps/examples access
the "disable_implicit_release" field, which is refactored in 1/2.
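
To make the dependency concrete, a hedged sketch of the application-side
change implied by 1/2 (names taken from the diffs below; this only compiles
against the headers as modified by that patch):

#include <rte_eventdev.h>

/* Before this series an application would set the boolean field directly:
 *     port_conf->disable_implicit_release = 1;
 * After patch 1/2 the same request is expressed as a flag.
 */
static void
request_explicit_release(struct rte_event_port_conf *port_conf)
{
	port_conf->event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
}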

For review purposes, this split is quite convenient - so I suggest we can
merge on apply if no changes are required?

<snip>
>  drivers/event/sw/sw_evdev.c                    |  8 ++--
>  drivers/event/sw/sw_evdev_selftest.c           |  6 +--

For SW PMD component and selftests;
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>


>  static void
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index 98dae71..058f568 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
>  	}
> 
>  	p->inflight_max = conf->new_event_threshold;
> -	p->implicit_release = !conf->disable_implicit_release;
> +	p->implicit_release = !(conf->event_port_cfg &
> +				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
> 
>  	/* check if ring exists, same as rx_worker above */
>  	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
> @@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t
> port_id,
>  	port_conf->new_event_threshold = 1024;
>  	port_conf->dequeue_depth = 16;
>  	port_conf->enqueue_depth = 16;
> -	port_conf->disable_implicit_release = 0;
> +	port_conf->event_port_cfg = 0;
>  }
> 
>  static int
> @@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct
> rte_event_dev_info *info)
> 
> 	RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
>  				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>  				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> -				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
> +				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> +				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
>  	};
> 
>  	*info = evdev_sw_info;
> diff --git a/drivers/event/sw/sw_evdev_selftest.c
> b/drivers/event/sw/sw_evdev_selftest.c
> index 38c21fa..4a7d823 100644
> --- a/drivers/event/sw/sw_evdev_selftest.c
> +++ b/drivers/event/sw/sw_evdev_selftest.c
> @@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
>  			.new_event_threshold = 1024,
>  			.dequeue_depth = 32,
>  			.enqueue_depth = 64,
> -			.disable_implicit_release = 0,
>  	};
>  	if (num_ports > MAX_PORTS)
>  		return -1;
> @@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
>  				.new_event_threshold = 128,
>  				.dequeue_depth = 32,
>  				.enqueue_depth = 64,
> -				.disable_implicit_release = 0,
>  		};
>  		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>  			printf("%d Error setting up port\n", __LINE__);
> @@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
>  		.new_event_threshold = 128,
>  		.dequeue_depth = 32,
>  		.enqueue_depth = 64,
> -		.disable_implicit_release = 0,
>  	};
>  	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>  		printf("%d Error setting up port\n", __LINE__);
> @@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t
> disable_implicit_release)
>  	 * only be initialized once - and this needs to be set for multiple runs
>  	 */
>  	conf.new_event_threshold = 512;
> -	conf.disable_implicit_release = disable_implicit_release;
> +	conf.event_port_cfg = disable_implicit_release ?
> +		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
> 
>  	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
>  		printf("Error setting up RX port\n");


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 2/2] mempool: remove experimental tags
  2020-10-06  7:07  7% [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Olivier Matz
@ 2020-10-06  7:07  7% ` Olivier Matz
  2020-10-06  8:15  4% ` [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Bruce Richardson
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2020-10-06  7:07 UTC (permalink / raw)
  To: dev; +Cc: Andrew Rybchenko, Ray Kinsella, Neil Horman

Move symbols introduced in version <= 19.11 to the stable ABI.
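
A minimal illustrative sketch (not part of the patch): with the tags removed,
a helper such as rte_mempool_ops_get_info() can be called from code built
against the stable ABI without the ALLOW_EXPERIMENTAL_API opt-in.

#include <rte_mempool.h>

/* Illustrative sketch only: rte_mempool_ops_get_info() is now part of
 * the stable ABI, so this compiles without ALLOW_EXPERIMENTAL_API.
 */
static int
mempool_supports_contig_blocks(const struct rte_mempool *mp)
{
        struct rte_mempool_info info;

        /* -ENOTSUP simply means the driver has no get_info callback. */
        return rte_mempool_ops_get_info(mp, &info) == 0;
}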

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.h           | 32 ----------------------
 lib/librte_mempool/rte_mempool_version.map | 12 +++-----
 2 files changed, 4 insertions(+), 40 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9ea7ff934c..c551cf733a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -191,9 +191,6 @@ struct rte_mempool_memhdr {
 };
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Additional information about the mempool
  *
  * The structure is cache-line aligned to avoid ABI breakages in
@@ -358,9 +355,6 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * @internal Check contiguous object blocks and update cookies or panic.
  *
  * @param mp
@@ -421,9 +415,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Dequeue a number of contiguous object blocks from the external pool.
  */
 typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
@@ -462,9 +453,6 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * @internal Helper to calculate memory size required to store given
  * number of objects.
  *
@@ -499,7 +487,6 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * @return
  *   Required memory size.
  */
-__rte_experimental
 ssize_t rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift, size_t chunk_reserve,
 		size_t *min_chunk_size, size_t *align);
@@ -569,9 +556,6 @@ typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
 #define RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ 0x0001
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * @internal Helper to populate memory pool object using provided memory
  * chunk: just slice objects one by one, taking care of not
  * crossing page boundaries.
@@ -603,7 +587,6 @@ typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
  * @return
  *   The number of objects added in mempool.
  */
-__rte_experimental
 int rte_mempool_op_populate_helper(struct rte_mempool *mp,
 		unsigned int flags, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
@@ -621,9 +604,6 @@ int rte_mempool_op_populate_default(struct rte_mempool *mp,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Get some additional information about a mempool.
  */
 typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
@@ -846,9 +826,6 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     void *obj_cb_arg);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Wrapper for mempool_ops get_info callback.
  *
  * @param[in] mp
@@ -860,7 +837,6 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
  *        mempool information
  *   - -ENOTSUP - doesn't support get_info ops (valid case).
  */
-__rte_experimental
 int rte_mempool_ops_get_info(const struct rte_mempool *mp,
 			 struct rte_mempool_info *info);
 
@@ -1577,9 +1553,6 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Get a contiguous blocks of objects from the mempool.
  *
  * If cache is enabled, consider to flush it first, to reuse objects
@@ -1601,7 +1574,6 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
  *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
  */
 static __rte_always_inline int
-__rte_experimental
 rte_mempool_get_contig_blocks(struct rte_mempool *mp,
 			      void **first_obj_table, unsigned int n)
 {
@@ -1786,13 +1758,9 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
 		      void *arg);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * @internal Get page size used for mempool object allocation.
  * This function is internal to mempool library and mempool drivers.
  */
-__rte_experimental
 int
 rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 83760ecfc9..50b0602952 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -12,13 +12,17 @@ DPDK_21 {
 	rte_mempool_create_empty;
 	rte_mempool_dump;
 	rte_mempool_free;
+	rte_mempool_get_page_size;
 	rte_mempool_in_use_count;
 	rte_mempool_list_dump;
 	rte_mempool_lookup;
 	rte_mempool_mem_iter;
 	rte_mempool_obj_iter;
 	rte_mempool_op_calc_mem_size_default;
+	rte_mempool_op_calc_mem_size_helper;
 	rte_mempool_op_populate_default;
+	rte_mempool_op_populate_helper;
+	rte_mempool_ops_get_info;
 	rte_mempool_ops_table;
 	rte_mempool_populate_anon;
 	rte_mempool_populate_default;
@@ -34,14 +38,6 @@ DPDK_21 {
 EXPERIMENTAL {
 	global:
 
-	# added in 18.05
-	rte_mempool_ops_get_info;
-
-	# added in 19.11
-	rte_mempool_get_page_size;
-	rte_mempool_op_calc_mem_size_helper;
-	rte_mempool_op_populate_helper;
-
 	# added in 20.05
 	__rte_mempool_trace_ops_dequeue_bulk;
 	__rte_mempool_trace_ops_dequeue_contig_blocks;
-- 
2.25.1


^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI
@ 2020-10-06  7:07  7% Olivier Matz
  2020-10-06  7:07  7% ` [dpdk-dev] [PATCH 2/2] mempool: remove experimental tags Olivier Matz
                   ` (3 more replies)
  0 siblings, 4 replies; 200+ results
From: Olivier Matz @ 2020-10-06  7:07 UTC (permalink / raw)
  To: dev; +Cc: Andrew Rybchenko, Ray Kinsella, Neil Horman

Remove the deprecated v20 ABI of rte_mempool_populate_iova() and
rte_mempool_populate_virt().

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/meson.build             |  2 -
 lib/librte_mempool/rte_mempool.c           | 79 ++--------------------
 lib/librte_mempool/rte_mempool_version.map |  7 --
 3 files changed, 5 insertions(+), 83 deletions(-)

diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 7dbe6b9bea..a6e861cbfc 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -9,8 +9,6 @@ foreach flag: extra_flags
 	endif
 endforeach
 
-use_function_versioning = true
-
 sources = files('rte_mempool.c', 'rte_mempool_ops.c',
 		'rte_mempool_ops_default.c', 'mempool_trace_points.c')
 headers = files('rte_mempool.h', 'rte_mempool_trace.h',
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7774f0c8da..0e3a2a7635 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -30,7 +30,6 @@
 #include <rte_string_fns.h>
 #include <rte_spinlock.h>
 #include <rte_tailq.h>
-#include <rte_function_versioning.h>
 #include <rte_eal_paging.h>
 
 
@@ -305,17 +304,12 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
 	return 0;
 }
 
-__vsym int
-rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
-	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque);
-
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-__vsym int
-rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -375,35 +369,6 @@ rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
 	return ret;
 }
 
-BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);
-MAP_STATIC_SYMBOL(
-	int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
-				rte_iova_t iova, size_t len,
-				rte_mempool_memchunk_free_cb_t *free_cb,
-				void *opaque),
-	rte_mempool_populate_iova_v21);
-
-__vsym int
-rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
-	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque);
-
-__vsym int
-rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
-	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque)
-{
-	int ret;
-
-	ret = rte_mempool_populate_iova_v21(mp, vaddr, iova, len, free_cb,
-					opaque);
-	if (ret == 0)
-		ret = -EINVAL;
-
-	return ret;
-}
-VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
-
 static rte_iova_t
 get_iova(void *addr)
 {
@@ -417,16 +382,11 @@ get_iova(void *addr)
 	return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
 }
 
-__vsym int
-rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
-	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque);
-
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error.
  */
-__vsym int
-rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
+int
+rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -459,7 +419,7 @@ rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
 				break;
 		}
 
-		ret = rte_mempool_populate_iova_v21(mp, addr + off, iova,
+		ret = rte_mempool_populate_iova(mp, addr + off, iova,
 			phys_len, free_cb, opaque);
 		if (ret == 0)
 			continue;
@@ -477,35 +437,6 @@ rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
 	rte_mempool_free_memchunks(mp);
 	return ret;
 }
-BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v21, 21);
-MAP_STATIC_SYMBOL(
-	int rte_mempool_populate_virt(struct rte_mempool *mp,
-				char *addr, size_t len, size_t pg_sz,
-				rte_mempool_memchunk_free_cb_t *free_cb,
-				void *opaque),
-	rte_mempool_populate_virt_v21);
-
-__vsym int
-rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
-	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque);
-
-__vsym int
-rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
-	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque)
-{
-	int ret;
-
-	ret = rte_mempool_populate_virt_v21(mp, addr, len, pg_sz,
-						free_cb, opaque);
-
-	if (ret == 0)
-		ret = -EINVAL;
-
-	return ret;
-}
-VERSION_SYMBOL(rte_mempool_populate_virt, _v20, 20.0);
 
 /* Get the minimal page size used in a mempool before populating it. */
 int
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 50e22ee020..83760ecfc9 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -31,13 +31,6 @@ DPDK_21 {
 	local: *;
 };
 
-DPDK_20.0 {
-	global:
-
-	rte_mempool_populate_iova;
-	rte_mempool_populate_virt;
-};
-
 EXPERIMENTAL {
 	global:
 
-- 
2.25.1


^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH v2] drivers/common: mark all symbols as internal
  2020-10-01  8:00  0%   ` Kinsella, Ray
@ 2020-10-05 23:16  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-05 23:16 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Anoob Joseph, Neil Horman, Liron Himi, Harman Kalra, Kinsella, Ray

01/10/2020 10:00, Kinsella, Ray:
> On 01/10/2020 08:55, David Marchand wrote:
> > Now that we have the internal tag, let's avoid confusion with exported
> > symbols in common drivers that were using the experimental tag as a
> > workaround.
> > There is also no need to put internal API symbols in the public stable
> > ABI.
> > 
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Anoob Joseph <anoobj@marvell.com>
> 
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Applied, thanks



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
  2020-10-05 20:27  9% [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
  2020-10-05 20:27  2% ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-05 20:27  6% ` Timothy McDaniel
  2020-10-06  8:26  4%   ` Van Haaren, Harry
  1 sibling, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-05 20:27 UTC (permalink / raw)
  To: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, erik.g.carrillo, gage.eads, hemant.agrawal

Several data structures and constants changed, or were added,
in the previous patch.  This commit updates the dependent
apps and examples to use the new ABI.
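
The recurring idiom in the diff below, condensed into an illustrative sketch
(identifiers match the patch; the real tests check the capability once,
outside the datapath):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Illustrative sketch: restore the flow ID from the mbuf when the device
 * does not advertise RTE_EVENT_DEV_CAP_CARRY_FLOW_ID. The producer is
 * assumed to have stored the flow in m->udata64, as test_order_common.c
 * does in the diff below.
 */
static inline void
restore_flow_id(uint8_t dev_id, struct rte_event *ev)
{
        struct rte_event_dev_info info;

        rte_event_dev_info_get(dev_id, &info);
        if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
                ev->flow_id = ev->mbuf->udata64;
}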

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 app/test-eventdev/evt_common.h                     | 11 ++++++++
 app/test-eventdev/test_order_atq.c                 | 28 +++++++++++++++------
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 ++++++++++++++++------
 app/test/test_eventdev.c                           |  4 +--
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +++--
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++++--
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +++--
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++++--
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +++--
 11 files changed, 80 insertions(+), 26 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+			true : false;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
 			.dequeue_timeout_ns = opt->deq_tmo_nsec,
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = info.max_num_events,
 			.nb_event_queue_flows = opt->nb_flows,
 			.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.sub_event_type == 0) { /* stage 0 from producer */
 			order_atq_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
 }
 
 static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].sub_event_type == 0) { /*stage 0 */
 				order_atq_process_stage_0(&ev[i]);
 			} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_atq_worker_burst(arg);
-	else
-		return order_atq_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_atq_worker_burst(arg, true);
+		else
+			return order_atq_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_atq_worker(arg, true);
+		else
+			return order_atq_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
 		const uint32_t flow = (uintptr_t)m % nb_flows;
 		/* Maintain seq number per flow */
 		m->seqn = producer_flow_seq[flow]++;
+		m->udata64 = flow;
 
 		ev.flow_id = flow;
 		ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
 }
 
 static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
 			continue;
 		}
 
+		if (!flow_id_cap)
+			ev.flow_id = ev.mbuf->udata64;
+
 		if (ev.queue_id == 0) { /* from ordered queue */
 			order_queue_process_stage_0(&ev);
 			while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
 }
 
 static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
 {
 	ORDER_WORKER_INIT;
 	struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
+
+			if (!flow_id_cap)
+				ev[i].flow_id = ev[i].mbuf->udata64;
+
 			if (ev[i].queue_id == 0) { /* from ordered queue */
 				order_queue_process_stage_0(&ev[i]);
 			} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
 {
 	struct worker_data *w  = arg;
 	const bool burst = evt_has_burst_mode(w->dev_id);
-
-	if (burst)
-		return order_queue_worker_burst(arg);
-	else
-		return order_queue_worker(arg);
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	if (burst) {
+		if (flow_id_cap)
+			return order_queue_worker_burst(arg, true);
+		else
+			return order_queue_worker_burst(arg, false);
+	} else {
+		if (flow_id_cap)
+			return order_queue_worker(arg, true);
+		else
+			return order_queue_worker(arg, false);
+	}
 }
 
 static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
 	if (!(info.event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		pconf.enqueue_depth = info.max_event_port_enqueue_depth;
-		pconf.disable_implicit_release = 1;
+		pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 		ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
 		TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
-		pconf.disable_implicit_release = 0;
+		pconf.event_port_cfg = 0;
 	}
 
 	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 1,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
 			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
-		.nb_atomic_order_sequences = 1024,
+			.nb_atomic_order_sequences = 1024,
 	};
 	struct rte_event_queue_conf tx_q_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
 	disable_implicit_release = (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 
-	wkr_p_conf.disable_implicit_release = disable_implicit_release;
+	wkr_p_conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (dev_info.max_num_events < config.nb_events_limit)
 		config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 	struct rte_event_dev_config config = {
 			.nb_event_queues = nb_queues,
 			.nb_event_ports = nb_ports,
+			.nb_single_link_event_port_queues = 0,
 			.nb_events_limit  = 4096,
 			.nb_event_queue_flows = 1024,
 			.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
 	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
 	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
 		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
 
-	event_p_conf.disable_implicit_release =
-		evt_rsrc->disable_implicit_release;
+	event_p_conf.event_port_cfg = 0;
+	if (evt_rsrc->disable_implicit_release)
+		event_p_conf.event_port_cfg |=
+			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
 
 	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
 								event_p_id++) {
-- 
2.6.4


^ permalink raw reply	[relevance 6%]

* [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
  2020-10-05 20:27  9% [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-05 20:27  2% ` Timothy McDaniel
  2020-10-06  8:15  0%   ` Van Haaren, Harry
  2020-10-12 19:06  0%   ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
  2020-10-05 20:27  6% ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
  1 sibling, 2 replies; 200+ results
From: Timothy McDaniel @ 2020-10-05 20:27 UTC (permalink / raw)
  To: Hemant Agrawal, Nipun Gupta, Mattias Rönnblom, Jerin Jacob,
	Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren,
	Nikhil Rao, Ray Kinsella, Neil Horman
  Cc: dev, erik.g.carrillo, gage.eads

This commit implements the eventdev ABI changes required by
the DLB PMD.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dpaa/dpaa_eventdev.c             |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c           |  5 +-
 drivers/event/dsw/dsw_evdev.c                  |  3 +-
 drivers/event/octeontx/ssovf_evdev.c           |  5 +-
 drivers/event/octeontx2/otx2_evdev.c           |  3 +-
 drivers/event/opdl/opdl_evdev.c                |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c     |  5 +-
 drivers/event/sw/sw_evdev.c                    |  8 ++--
 drivers/event/sw/sw_evdev_selftest.c           |  6 +--
 lib/librte_eventdev/rte_event_eth_tx_adapter.c |  2 +-
 lib/librte_eventdev/rte_eventdev.c             | 66 +++++++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h             | 51 ++++++++++++++++----
 lib/librte_eventdev/rte_eventdev_pmd_pci.h     |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h       |  7 +--
 lib/librte_eventdev/rte_eventdev_version.map   |  4 +-
 15 files changed, 134 insertions(+), 38 deletions(-)

diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
 		RTE_EVENT_DEV_CAP_BURST_MODE |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 3ae4441..712db6c 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
 	port_conf->enqueue_depth =
 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
 	};
 }
 
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 4fc4e8f..1c6bcca 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 
 }
 
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = edev->max_num_events;
 	port_conf->dequeue_depth = 1;
 	port_conf->enqueue_depth = 1;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index b8b57c3..ae35bb5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 		.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
-		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
 	};
 
 	*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
 	dev_info->max_num_events = (1ULL << 20);
 	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
 					RTE_EVENT_DEV_CAP_BURST_MODE |
-					RTE_EVENT_DEV_CAP_EVENT_QOS;
+					RTE_EVENT_DEV_CAP_EVENT_QOS |
+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
 static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 32 * 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 98dae71..058f568 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 	}
 
 	p->inflight_max = conf->new_event_threshold;
-	p->implicit_release = !conf->disable_implicit_release;
+	p->implicit_release = !(conf->event_port_cfg &
+				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
 
 	/* check if ring exists, same as rx_worker above */
 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
 	port_conf->new_event_threshold = 1024;
 	port_conf->dequeue_depth = 16;
 	port_conf->enqueue_depth = 16;
-	port_conf->disable_implicit_release = 0;
+	port_conf->event_port_cfg = 0;
 }
 
 static int
@@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 				RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
 				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
-				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
 	};
 
 	*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
 			.new_event_threshold = 1024,
 			.dequeue_depth = 32,
 			.enqueue_depth = 64,
-			.disable_implicit_release = 0,
 	};
 	if (num_ports > MAX_PORTS)
 		return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
 				.new_event_threshold = 128,
 				.dequeue_depth = 32,
 				.enqueue_depth = 64,
-				.disable_implicit_release = 0,
 		};
 		if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 			printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
 		.new_event_threshold = 128,
 		.dequeue_depth = 32,
 		.enqueue_depth = 64,
-		.disable_implicit_release = 0,
 	};
 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
 		printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
 	 * only be initialized once - and this needs to be set for multiple runs
 	 */
 	conf.new_event_threshold = 512;
-	conf.disable_implicit_release = disable_implicit_release;
+	conf.event_port_cfg = disable_implicit_release ?
+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
 
 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
 		printf("Error setting up RX port\n");
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index bb21dc4..8a72256 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
 		return ret;
 	}
 
-	pc->disable_implicit_release = 0;
+	pc->event_port_cfg = 0;
 	ret = rte_event_port_setup(dev_id, port_id, pc);
 	if (ret) {
 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 82c177c..3a5b738 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -32,6 +32,7 @@
 #include <rte_ethdev.h>
 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
+#include <rte_compat.h>
 
 #include "rte_eventdev.h"
 #include "rte_eventdev_pmd.h"
@@ -437,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
 					dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_queues > info.max_event_queues) {
-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
-		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+	if (dev_conf->nb_event_queues > info.max_event_queues +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 info.max_event_queues,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues -
+			dev_conf->nb_single_link_event_port_queues >
+			info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+				 dev_id, dev_conf->nb_event_queues,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_queues);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_single_link_event_port_queues >
+			dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_queues);
 		return -EINVAL;
 	}
 
@@ -448,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
 		return -EINVAL;
 	}
-	if (dev_conf->nb_event_ports > info.max_event_ports) {
-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
-		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+	if (dev_conf->nb_event_ports > info.max_event_ports +
+			info.max_single_link_event_port_queue_pairs) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 info.max_event_ports,
+				 info.max_single_link_event_port_queue_pairs);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports -
+			dev_conf->nb_single_link_event_port_queues
+			> info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+				 dev_id, dev_conf->nb_event_ports,
+				 dev_conf->nb_single_link_event_port_queues,
+				 info.max_event_ports);
+		return -EINVAL;
+	}
+
+	if (dev_conf->nb_single_link_event_port_queues >
+	    dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR(
+				 "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+				 dev_id,
+				 dev_conf->nb_single_link_event_port_queues,
+				 dev_conf->nb_event_ports);
 		return -EINVAL;
 	}
 
@@ -737,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		return -EINVAL;
 	}
 
-	if (port_conf && port_conf->disable_implicit_release &&
+	if (port_conf &&
+	    (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
 	    !(dev->data->event_dev_cap &
 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
 		RTE_EDEV_LOG_ERR(
@@ -830,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
 		*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
 		break;
+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+	{
+		uint32_t config;
+
+		config = dev->data->ports_cfg[port_id].event_port_cfg;
+		*attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+		break;
+	}
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
  * single queue to each port or map a single queue to many port.
  */
 
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
 	 * event port by this device.
 	 * A device that does not support bulk enqueue will set this as 1.
 	 */
+	uint8_t max_event_port_links;
+	/**< Maximum number of queues that can be linked to a single event
+	 * port by this device.
+	 */
 	int32_t max_num_events;
 	/**< A *closed system* event dev has a limit on the number of events it
 	 * can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
 	 */
 	uint32_t event_dev_cap;
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	uint8_t max_single_link_event_port_queue_pairs;
+	/**< Maximum number of event ports and queues that are optimized for
+	 * (and only capable of) single-link configurations supported by this
+	 * device. These ports and queues are not accounted for in
+	 * max_event_ports or max_event_queues.
+	 */
 };
 
 /**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
 	 */
 	uint32_t event_dev_cfg;
 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+	uint8_t nb_single_link_event_port_queues;
+	/**< Number of event ports and queues that will be singly-linked to
+	 * each other. These are a subset of the overall event ports and
+	 * queues; this value cannot exceed *nb_event_ports* or
+	 * *nb_event_queues*. If the device has ports and queues that are
+	 * optimized for single-link usage, this field is a hint for how many
+	 * to allocate; otherwise, regular event ports and queues can be used.
+	 */
 };
 
 /**
@@ -519,7 +543,6 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf);
 
-
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 /* Event port specific APIs */
 
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
 /** Event port configuration structure */
 struct rte_event_port_conf {
 	int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
 	 * which previously supplied to rte_event_dev_configure().
 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
 	 */
-	uint8_t disable_implicit_release;
-	/**< Configure the port not to release outstanding events in
-	 * rte_event_dev_dequeue_burst(). If true, all events received through
-	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is not
-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
-	 */
+	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
 };
 
 /**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
  * The new event threshold of the port
  */
 #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
 
 /**
  * Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
 	return -ENXIO;
 }
 
-
 /**
  * @internal
  * Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
 	rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+	rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_int(rc);
 )
 
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 	rte_trace_point_emit_ptr(conf_cb);
 	rte_trace_point_emit_int(rc);
 )
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
 	# added in 20.05
 	__rte_eventdev_trace_configure;
 	__rte_eventdev_trace_queue_setup;
-	__rte_eventdev_trace_port_setup;
 	__rte_eventdev_trace_port_link;
 	__rte_eventdev_trace_port_unlink;
 	__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
 	__rte_eventdev_trace_crypto_adapter_start;
 	__rte_eventdev_trace_crypto_adapter_stop;
+
+	# changed in 20.11
+	__rte_eventdev_trace_port_setup;
 };
-- 
2.6.4


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2
@ 2020-10-05 20:27  9% Timothy McDaniel
  2020-10-05 20:27  2% ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
  2020-10-05 20:27  6% ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
  0 siblings, 2 replies; 200+ results
From: Timothy McDaniel @ 2020-10-05 20:27 UTC (permalink / raw)
  Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, hemant.agrawal

This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.

The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.

Due to the above, we would like to propose the following enhancements.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

3) Replace the dedicated disable_implicit_release field with a bit field
of explicit port configuration flags. The implicit-release-disable
functionality is assigned to one bit, and a port-is-single-link-only
attribute is assigned to another, with the remaining bits available for
future assignment (a usage sketch of the resulting ABI follows below).
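
An illustrative sketch of the proposed ABI in use (identifiers are taken
from patch 1/2; the sizing values are placeholders, not recommendations):

#include <rte_eventdev.h>

/* Configure one singly-linked port/queue pair in addition to the regular
 * load-balanced ports and queues, then set up port 0 with the new flags.
 */
static int
configure_with_single_link(uint8_t dev_id)
{
        struct rte_event_dev_info info;
        struct rte_event_dev_config cfg = {0};
        struct rte_event_port_conf pconf;

        rte_event_dev_info_get(dev_id, &info);

        cfg.nb_event_queues = info.max_event_queues;
        cfg.nb_event_ports = info.max_event_ports;
        cfg.nb_single_link_event_port_queues = 1;       /* new field */
        cfg.nb_events_limit = info.max_num_events;
        cfg.nb_event_queue_flows = 1024;                /* placeholder */
        cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
        cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;

        if (rte_event_dev_configure(dev_id, &cfg) < 0)
                return -1;

        /* Port flags replace the old disable_implicit_release field. */
        rte_event_port_default_conf_get(dev_id, 0, &pconf);
        if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
                pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
        pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_SINGLE_LINK;

        return rte_event_port_setup(dev_id, 0, &pconf);
}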

Note that it was requested that we split this app/test
changes out from the eventdev ABI patch. As a result,
neither of these patches will build without the other
also being applied.

Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang

Testing showed no performance impact due to the flow_id template code
added to the test app.

[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html


Timothy McDaniel (2):
  eventdev: eventdev: express DLB/DLB2 PMD constraints
  eventdev: update app and examples for new eventdev ABI

 app/test-eventdev/evt_common.h                     | 11 ++++
 app/test-eventdev/test_order_atq.c                 | 28 ++++++---
 app/test-eventdev/test_order_common.c              |  1 +
 app/test-eventdev/test_order_queue.c               | 29 +++++++---
 app/test/test_eventdev.c                           |  4 +-
 drivers/event/dpaa/dpaa_eventdev.c                 |  3 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |  5 +-
 drivers/event/dsw/dsw_evdev.c                      |  3 +-
 drivers/event/octeontx/ssovf_evdev.c               |  5 +-
 drivers/event/octeontx2/otx2_evdev.c               |  3 +-
 drivers/event/opdl/opdl_evdev.c                    |  3 +-
 drivers/event/skeleton/skeleton_eventdev.c         |  5 +-
 drivers/event/sw/sw_evdev.c                        |  8 ++-
 drivers/event/sw/sw_evdev_selftest.c               |  6 +-
 .../eventdev_pipeline/pipeline_worker_generic.c    |  6 +-
 examples/eventdev_pipeline/pipeline_worker_tx.c    |  1 +
 examples/l2fwd-event/l2fwd_event_generic.c         |  7 ++-
 examples/l2fwd-event/l2fwd_event_internal_port.c   |  6 +-
 examples/l3fwd/l3fwd_event_generic.c               |  7 ++-
 examples/l3fwd/l3fwd_event_internal_port.c         |  6 +-
 lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
 lib/librte_eventdev/rte_eventdev.c                 | 66 +++++++++++++++++++---
 lib/librte_eventdev/rte_eventdev.h                 | 51 ++++++++++++++---
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |  1 -
 lib/librte_eventdev/rte_eventdev_trace.h           |  7 ++-
 lib/librte_eventdev/rte_eventdev_version.map       |  4 +-
 26 files changed, 214 insertions(+), 64 deletions(-)

-- 
2.6.4


^ permalink raw reply	[relevance 9%]

* [dpdk-dev] [PATCH v3 03/14] acl: remove of unused enum value
  2020-10-05 18:45  3% ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
@ 2020-10-05 18:45 20%   ` Konstantin Ananyev
  2020-10-06 15:03  3%   ` [dpdk-dev] [PATCH v4 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
  2020-10-06 15:05  3%   ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods David Marchand
  2 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2020-10-05 18:45 UTC (permalink / raw)
  To: dev; +Cc: jerinj, ruifeng.wang, vladimir.medvedkin, Konstantin Ananyev

Removal of unused enum value (RTE_ACL_CLASSIFY_NUM).
This enum value is not used inside DPDK, while it prevents
adding new classify algorithms without causing an ABI breakage.

Note that this change introduces a formal ABI incompatibility
with previous versions of the ACL library.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 doc/guides/rel_notes/deprecation.rst   | 4 ----
 doc/guides/rel_notes/release_20_11.rst | 4 ++++
 lib/librte_acl/rte_acl.h               | 1 -
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8080a28896..938e967c8f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -209,10 +209,6 @@ Deprecation Notices
   - https://patches.dpdk.org/patch/71457/
   - https://patches.dpdk.org/patch/71456/
 
-* acl: ``RTE_ACL_CLASSIFY_NUM`` enum value will be removed.
-  This enum value is not used inside DPDK, while it prevents to add new
-  classify algorithms without causing an ABI breakage.
-
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6d8c24413d..e0de60c0c2 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -210,6 +210,10 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* acl: ``RTE_ACL_CLASSIFY_NUM`` enum value has been removed.
+  This enum value was not used inside DPDK, while it prevented to add new
+  classify algorithms without causing an ABI breakage.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_acl/rte_acl.h b/lib/librte_acl/rte_acl.h
index aa22e70c6e..b814423a63 100644
--- a/lib/librte_acl/rte_acl.h
+++ b/lib/librte_acl/rte_acl.h
@@ -241,7 +241,6 @@ enum rte_acl_classify_alg {
 	RTE_ACL_CLASSIFY_AVX2 = 3,    /**< requires AVX2 support. */
 	RTE_ACL_CLASSIFY_NEON = 4,    /**< requires NEON support. */
 	RTE_ACL_CLASSIFY_ALTIVEC = 5,    /**< requires ALTIVEC support. */
-	RTE_ACL_CLASSIFY_NUM          /* should always be the last one. */
 };
 
 /**
-- 
2.17.1


^ permalink raw reply	[relevance 20%]

* [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods
  @ 2020-10-05 18:45  3% ` Konstantin Ananyev
  2020-10-05 18:45 20%   ` [dpdk-dev] [PATCH v3 03/14] acl: remove of unused enum value Konstantin Ananyev
                     ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Konstantin Ananyev @ 2020-10-05 18:45 UTC (permalink / raw)
  To: dev; +Cc: jerinj, ruifeng.wang, vladimir.medvedkin, Konstantin Ananyev

This patch series introduces support for AVX512-specific classify
implementations in the ACL library.
It adds two new algorithms:
 - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
   It uses 256-bit width instructions/registers only
   (to avoid frequency level change).
   On my SKX box test-acl shows ~15-30% improvement
   (depending on rule-set and input burst size)
   when switching from AVX2 to AVX512X16 classify algorithms.
 - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
   It uses 512-bit width instructions/registers and provides higher
   performance than AVX512X16, but can cause a frequency level change.
   On my SKX box test-acl shows ~50-70% improvement
   (depending on rule-set and input burst size)
   when switching from AVX2 to AVX512X32 classify algorithms.
   ICX and CLX testing showed a similar level of speedup.

Current AVX512 classify implementation is only supported on x86_64.
Note that this series introduces a formal ABI incompatibility
with previous versions of the ACL library.
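
An illustrative sketch of opting in to the new methods (assumes this series
is applied and that rte_acl_set_ctx_classify() reports an error for methods
unsupported on the running CPU):

#include <rte_acl.h>

/* Illustrative sketch: prefer the faster 512-bit method and fall back to
 * the 256-bit one, which avoids AVX512 frequency-level changes.
 */
static void
acl_prefer_avx512(struct rte_acl_ctx *ctx)
{
        if (rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32) != 0)
                (void)rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X16);
}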

v2 -> v3:
  Fix checkpatch warnings
  Split AVX512 algorithm into two and deduplicate common code
v1 -> v2:
  Deduplicated 8/16 code paths as much as possible
  Updated default algorithm selection
    Removed library constructor to make it easier integrate with
    https://patches.dpdk.org/project/dpdk/list/?series=11831
  Updated docs

This patch series depends on:
https://patches.dpdk.org/patch/79310/
to be applied first.

Konstantin Ananyev (14):
  acl: fix x86 build when compiler doesn't support AVX2
  doc: fix missing classify methods in ACL guide
  acl: remove of unused enum value
  acl: remove library constructor
  app/acl: few small improvements
  test/acl: expand classify test coverage
  acl: add infrastructure to support AVX512 classify
  acl: introduce 256-bit width AVX512 classify implementation
  acl: update default classify algorithm selection
  acl: introduce 512-bit width AVX512 classify implementation
  acl: for AVX512 classify use 4B load whenever possible
  acl: deduplicate AVX512 code paths
  test/acl: add AVX512 classify support
  app/acl: add AVX512 classify support

 app/test-acl/main.c                           |  23 +-
 app/test/test_acl.c                           | 105 ++--
 config/x86/meson.build                        |   3 +-
 .../prog_guide/packet_classif_access_ctrl.rst |  20 +
 doc/guides/rel_notes/deprecation.rst          |   4 -
 doc/guides/rel_notes/release_20_11.rst        |  12 +
 lib/librte_acl/acl.h                          |  16 +
 lib/librte_acl/acl_bld.c                      |  34 ++
 lib/librte_acl/acl_gen.c                      |   2 +-
 lib/librte_acl/acl_run_avx512.c               | 164 ++++++
 lib/librte_acl/acl_run_avx512_common.h        | 477 ++++++++++++++++++
 lib/librte_acl/acl_run_avx512x16.h            | 341 +++++++++++++
 lib/librte_acl/acl_run_avx512x8.h             | 253 ++++++++++
 lib/librte_acl/meson.build                    |  39 ++
 lib/librte_acl/rte_acl.c                      | 212 ++++++--
 lib/librte_acl/rte_acl.h                      |   4 +-
 16 files changed, 1609 insertions(+), 100 deletions(-)
 create mode 100644 lib/librte_acl/acl_run_avx512.c
 create mode 100644 lib/librte_acl/acl_run_avx512_common.h
 create mode 100644 lib/librte_acl/acl_run_avx512x16.h
 create mode 100644 lib/librte_acl/acl_run_avx512x8.h

-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
  @ 2020-10-05 16:26  2% ` Vikas Gupta
  2020-10-07 16:45  2%   ` [dpdk-dev] [PATCH v4 " Vikas Gupta
  0 siblings, 1 reply; 200+ results
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta

Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs, which have a FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.

The patchset progressively adds major modules as below.
a) Detection of the platform device based on the known registered platforms and attaching it with VFIO.
b) Creation of the Cryptodevice.
c) Addition of session handling.
d) Addition of the Cryptodevice into the Cryptodev test framework.

The patchset has been tested on the above mentioned SoCs.
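
For context, once probed the device is reachable through the standard
cryptodev API; a minimal sketch of discovering it follows (the driver
name string is an assumption, check the PMD for the exact value):

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_cryptodev.h>

    static void
    dump_bcmfs_devs(void)
    {
        uint8_t ids[RTE_CRYPTO_MAX_DEVS];
        uint8_t n = rte_cryptodev_devices_get("crypto_bcmfs", ids, RTE_DIM(ids));

        for (uint8_t i = 0; i < n; i++) {
            struct rte_cryptodev_info info;

            rte_cryptodev_info_get(ids[i], &info);
            printf("dev %u: %s, %u queue pairs\n", ids[i],
                   info.driver_name, info.max_nb_queue_pairs);
        }
    }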

Regards,
Vikas

Changes from v0->v1: 
      Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map

Changes from v1->v2:
	- Fix compilation errors and coding style warnings.
	- Use global test crypto suite suggested by Adam Dybkowski

Changes from v2->v3:
	- Release notes updated.
	- bcmfs.rst updated with missing information about installation.
	- Review comments from patch1 from v2 addressed.
	- Updated description about dependency of PMD driver on VFIO_PRESENT.
	- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
	- Comments on patch6 from v2 addressed and capability list is fixed.
		Removed redundant enums and macros from the file
		bcmfs_sym_defs.h and updated other impacted APIs accordingly.
		patch7 too is updated due to removal of redundancy.
	  Thanks! to Akhil for pointing out the redundancy.
	- Fix minor code style issues in few files as part of review.

Vikas Gupta (8):
  crypto/bcmfs: add BCMFS driver
  crypto/bcmfs: add vfio support
  crypto/bcmfs: add apis for queue pair management
  crypto/bcmfs: add hw queue pair operations
  crypto/bcmfs: create a symmetric cryptodev
  crypto/bcmfs: add session handling and capabilities
  crypto/bcmfs: add crypto h/w module
  crypto/bcmfs: add crypto pmd into cryptodev test

 MAINTAINERS                                   |    7 +
 app/test/test_cryptodev.c                     |   17 +
 app/test/test_cryptodev.h                     |    1 +
 doc/guides/cryptodevs/bcmfs.rst               |  109 ++
 doc/guides/cryptodevs/features/bcmfs.ini      |   56 +
 doc/guides/cryptodevs/index.rst               |    1 +
 doc/guides/rel_notes/release_20_11.rst        |    5 +
 drivers/crypto/bcmfs/bcmfs_dev_msg.h          |   29 +
 drivers/crypto/bcmfs/bcmfs_device.c           |  332 +++++
 drivers/crypto/bcmfs/bcmfs_device.h           |   76 ++
 drivers/crypto/bcmfs/bcmfs_hw_defs.h          |   32 +
 drivers/crypto/bcmfs/bcmfs_logs.c             |   38 +
 drivers/crypto/bcmfs/bcmfs_logs.h             |   34 +
 drivers/crypto/bcmfs/bcmfs_qp.c               |  383 ++++++
 drivers/crypto/bcmfs/bcmfs_qp.h               |  142 ++
 drivers/crypto/bcmfs/bcmfs_sym.c              |  289 +++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c |  764 +++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h |   16 +
 drivers/crypto/bcmfs/bcmfs_sym_defs.h         |   34 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c       | 1155 +++++++++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_engine.h       |  115 ++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c          |  426 ++++++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.h          |   38 +
 drivers/crypto/bcmfs/bcmfs_sym_req.h          |   62 +
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |  282 ++++
 drivers/crypto/bcmfs/bcmfs_sym_session.h      |  109 ++
 drivers/crypto/bcmfs/bcmfs_vfio.c             |  107 ++
 drivers/crypto/bcmfs/bcmfs_vfio.h             |   17 +
 drivers/crypto/bcmfs/hw/bcmfs4_rm.c           |  743 +++++++++++
 drivers/crypto/bcmfs/hw/bcmfs5_rm.c           |  677 ++++++++++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c     |   82 ++
 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h     |   51 +
 drivers/crypto/bcmfs/meson.build              |   20 +
 .../crypto/bcmfs/rte_pmd_bcmfs_version.map    |    3 +
 drivers/crypto/meson.build                    |    1 +
 35 files changed, 6253 insertions(+)
 create mode 100644 doc/guides/cryptodevs/bcmfs.rst
 create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
 create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
 create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
 create mode 100644 drivers/crypto/bcmfs/meson.build
 create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map

-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v3 0/7] cmdline: support Windows
  @ 2020-10-05 15:33  0%   ` Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2020-10-05 15:33 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Kinsella, Ray, Khoa To, Stephen Hemminger, Ferruh Yigit

Hi Dmitry,

On Tue, Sep 29, 2020 at 12:50:45AM +0300, Dmitry Kozlyuk wrote:
> This patchset enables librte_cmdline on Windows. To do that, it creates
> a number of wrappers for OS-dependent terminal handling and I/O.
> Considered alternative was to revive [1] and use libedit (Unix-only)
> for terminal handling. However, testing revealed that WinEditLine [2]
> is not a drop-in replacement for libedit, so this solution wouldn't be
> universal.
> 
> [1]: http://patchwork.dpdk.org/patch/38561
> [2]: http://mingweditline.sourceforge.net
> 
> v3:
>     * Add #ifdef workaround to keep API/ABI for Unices (Olivier).
>     * Fix missing cmdline_free() in test (Olivier).
>     * Rebase on ToT (Khoa).
> 
> Dmitry Kozlyuk (7):
>   cmdline: make implementation logically opaque
>   cmdline: add internal wrappers for terminal handling
>   cmdline: add internal wrappers for character input
>   cmdline: add internal wrapper for vdprintf
>   eal/windows: improve compatibility networking headers
>   cmdline: support Windows
>   examples/cmdline: build on Windows
> 
>  app/test-cmdline/commands.c                 |   8 +-
>  app/test/test_cmdline_lib.c                 |  44 ++---
>  config/meson.build                          |   2 +
>  doc/guides/rel_notes/deprecation.rst        |   4 +
>  examples/cmdline/commands.c                 |   1 -
>  examples/cmdline/main.c                     |   1 -
>  examples/meson.build                        |   6 +-
>  lib/librte_cmdline/cmdline.c                |  30 +--
>  lib/librte_cmdline/cmdline.h                |  18 +-
>  lib/librte_cmdline/cmdline_os_unix.c        |  53 +++++
>  lib/librte_cmdline/cmdline_os_windows.c     | 207 ++++++++++++++++++++
>  lib/librte_cmdline/cmdline_parse.c          |   5 +-
>  lib/librte_cmdline/cmdline_private.h        |  53 +++++
>  lib/librte_cmdline/cmdline_socket.c         |  25 +--
>  lib/librte_cmdline/cmdline_vt100.c          |   1 -
>  lib/librte_cmdline/cmdline_vt100.h          |   4 +
>  lib/librte_cmdline/meson.build              |   6 +
>  lib/librte_cmdline/rte_cmdline_version.map  |   8 +
>  lib/librte_eal/windows/include/arpa/inet.h  |  30 +++
>  lib/librte_eal/windows/include/netinet/in.h |  12 ++
>  lib/librte_eal/windows/include/sys/socket.h |  24 +++
>  lib/meson.build                             |   1 +
>  22 files changed, 475 insertions(+), 68 deletions(-)
>  create mode 100644 lib/librte_cmdline/cmdline_os_unix.c
>  create mode 100644 lib/librte_cmdline/cmdline_os_windows.c
>  create mode 100644 lib/librte_cmdline/cmdline_private.h
>  create mode 100644 lib/librte_eal/windows/include/arpa/inet.h
>  create mode 100644 lib/librte_eal/windows/include/sys/socket.h

For series:
Acked-by: Olivier Matz <olivier.matz@6wind.com>

Thanks!

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 2/2] ethdev: change data type in TC rxq and TC txq
  @ 2020-10-05 12:26  0%       ` Ferruh Yigit
  2020-10-06 12:04  0%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-05 12:26 UTC (permalink / raw)
  To: Thomas Monjalon, Min Hu (Connor)
  Cc: techboard, stephen, bruce.richardson, jerinj, dev

On 9/28/2020 10:21 AM, Thomas Monjalon wrote:
> 28/09/2020 11:04, Ferruh Yigit:
>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
>>> From: Huisong Li <lihuisong@huawei.com>
>>>
>>> Currently, base and nb_queue in the tc_rxq and tc_txq information
>>> of queue and TC mapping on both TX and RX paths are uint8_t.
>>> However, these data will be truncated when queue number under a TC
>>> is greater than 256. So it is necessary for base and nb_queue to
>>> change from uint8_t to uint16_t.
> [...]
>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>    struct rte_eth_dcb_tc_queue_mapping {
>>>    	/** rx queues assigned to tc per Pool */
>>>    	struct {
>>> -		uint8_t base;
>>> -		uint8_t nb_queue;
>>> +		uint16_t base;
>>> +		uint16_t nb_queue;
>>>    	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
>>>    	/** rx queues assigned to tc per Pool */
>>>    	struct {
>>> -		uint8_t base;
>>> -		uint8_t nb_queue;
>>> +		uint16_t base;
>>> +		uint16_t nb_queue;
>>>    	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
>>>    };
>>>    
>>>
>>
>> cc'ed tech-board,
>>
>> The patch breaks the ethdev ABI without a deprecation notice from previous
>> release(s).
>>
>> It is increasing the storage size of the fields to support more than 255 queues.
> 
> Yes queues are in 16-bit range.
> 
>> Since the ethdev library already heavily breaks the ABI this release, I am for
>> getting this patch, instead of waiting for one more year for the update.
>>
>> Can you please review the patch, is there any objection to proceed with it?
> 
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> 
> 

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

I will continue with this patch (not patchset) if there is no objection.
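
For reference, applications consume these fields through
rte_eth_dev_get_dcb_info(); a minimal sketch (assuming a DCB-configured
port) showing where the widened base/nb_queue values end up:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_tc_rxq_map(uint16_t port_id)
    {
        struct rte_eth_dcb_info dcb_info;

        if (rte_eth_dev_get_dcb_info(port_id, &dcb_info) != 0)
            return;

        for (int tc = 0; tc < dcb_info.nb_tcs; tc++)
            /* base/nb_queue are the fields widened to 16 bits. */
            printf("TC%d: rxq base %u, %u queues\n", tc,
                   dcb_info.tc_queue.tc_rxq[0][tc].base,
                   dcb_info.tc_queue.tc_rxq[0][tc].nb_queue);
    }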

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats
  @ 2020-10-05 12:23  0%         ` Ferruh Yigit
  2020-10-06  8:33  0%           ` Olivier Matz
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-05 12:23 UTC (permalink / raw)
  To: Stephen Hemminger, Thomas Monjalon
  Cc: Min Hu (Connor), techboard, bruce.richardson, jerinj, Ray Kinsella, dev

On 9/28/2020 4:43 PM, Stephen Hemminger wrote:
> On Mon, 28 Sep 2020 17:24:26 +0200
> Thomas Monjalon <thomas@monjalon.net> wrote:
> 
>> 28/09/2020 15:53, Ferruh Yigit:
>>> On 9/28/2020 10:16 AM, Thomas Monjalon wrote:
>>>> 28/09/2020 10:59, Ferruh Yigit:
>>>>> On 9/27/2020 4:16 AM, Min Hu (Connor) wrote:
>>>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>>>
>>>>>> Currently, only statistics of rx/tx queues with queue_id less than
>>>>>> RTE_ETHDEV_QUEUE_STAT_CNTRS can be displayed. If there is a certain
>>>>>> application scenario that it needs to use 256 or more than 256 queues
>>>>>> and display all statistics of rx/tx queue. At this moment, we have to
>>>>>> change the macro to be equaled to the queue number.
>>>>>>
>>>>>> However, modifying the macro to be greater than 256 will trigger
>>>>>> many errors and warnings from test-pmd, PMD drivers and librte_ethdev
>>>>>> during compiling dpdk project. But it is possible and permitted that
>>>>>> rx/tx queue number is greater than 256 and all statistics of rx/tx
>>>>>> queue need to be displayed. In addition, the data type of rx/tx queue
>>>>>> number in rte_eth_dev_configure API is 'uint16_t'. So It is unreasonable
>>>>>> to use the 'uint8_t' type for variables that control which per-queue
>>>>>> statistics can be displayed.
>>>>
>>>> The explanation is too much complex and misleading.
>>>> You mean you cannot increase RTE_ETHDEV_QUEUE_STAT_CNTRS
>>>> above 256 because it is an 8-bit type?
>>>>
>>>> [...]
>>>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>>>>     int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
>>>>>> -		uint16_t tx_queue_id, uint8_t stat_idx);
>>>>>> +		uint16_t tx_queue_id, uint16_t stat_idx);
>>>> [...]
>>>>>>     int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
>>>>>>     					   uint16_t rx_queue_id,
>>>>>> -					   uint8_t stat_idx);
>>>>>> +					   uint16_t stat_idx);
>>>> [...]
>>>>> cc'ed tech-board,
>>>>>
>>>>> The patch breaks the ethdev ABI without a deprecation notice from previous
>>>>> release(s).
>>>>>
>>>>> It is mainly a fix to the port_id storage type, which we have updated from
>>>>> uint8_t to uint16_t in past but some seems remained for
>>>>> 'rte_eth_dev_set_tx_queue_stats_mapping()' &
>>>>> 'rte_eth_dev_set_rx_queue_stats_mapping()' APIs.
>>>>
>>>> No, it is not related to the port id, but the number of limited stats.
>>>>    
>>>
>>> Right, it is not related to the port id, it is fixing the storage type for index
>>> used to map the queue stats.
>>>    
>>>>> Since the ethdev library already heavily breaks the ABI this release, I am for
>>>>> getting this fix, instead of waiting the fix for one more year.
>>>>
>>>> If stats can be managed for more than 256 queues, I think it means
>>>> it is not limited. In this case, we probably don't need the API
>>>> *_queue_stats_mapping which was invented for a limitation of ixgbe.
>>>>
>>>> The problem is probably somewhere else (in testpmd),
>>>> that's why I am against this patch.
>>>>    
>>>
>>> This patch is not to fix queue stats mapping, I agree there are problems related
>>> to it, already shared as comment to this set.
>>>
>>> But this patch is to fix the build errors when 'RTE_ETHDEV_QUEUE_STAT_CNTRS'
>>> needs to set more than 255. Where the build errors seems around the
>>> stats_mapping APIs.
>>
>> It is not said this API is supposed to manage more than 256 queues mapping.
>> In general we should not need this API.
>> I think it is solving the wrong problem.
> 
> 
> The original API is a band aid for the limited number of statistics counters
> in the Intel IXGBE hardware. It crept into to the DPDK as an API. I would rather
> have per-queue statistics and make ixgbe say "not supported"
> 

The current issue is not directly related to the '*_queue_stats_mapping' APIs.

The problem is that 'RTE_ETHDEV_QUEUE_STAT_CNTRS' cannot be set > 255.
A user may need to set 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, since it is used
to define the size of the per-queue stats counters, e.g.
"uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];"

When 'RTE_ETHDEV_QUEUE_STAT_CNTRS' > 255, it gives multiple build errors; the
one in ethdev looks like [1].

This can be fixed in two ways:
a) increase the storage type of 'stat_idx' to u16 in the
'*_queue_stats_mapping' APIs, which is what this patch does.
b) fix it with a cast in the comparison, without changing the APIs.

I think both are OK, but is (b) preferable?


[1]
../lib/librte_ethdev/rte_ethdev.c: In function ‘set_queue_stats_mapping’:
../lib/librte_ethdev/rte_ethdev.c:2943:15: warning: comparison is always false 
due to limited range of data type [-Wtype-limits]
  2943 |  if (stat_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)
       |               ^~
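
For illustration only, a standalone sketch of option (b) with toy names
(not the actual ethdev code); note the cast only quiets the compiler,
a uint8_t index still cannot address more than 256 counters, which is
what option (a) addresses:

    #include <stdint.h>

    #define QUEUE_STAT_CNTRS 1024 /* stand-in for RTE_ETHDEV_QUEUE_STAT_CNTRS */

    static int
    check_stat_idx(uint8_t stat_idx)
    {
        /* Casting to a wider type is one way to quiet -Wtype-limits,
         * which otherwise flags this comparison as always false once
         * the macro exceeds 255.
         */
        if ((uint32_t)stat_idx >= QUEUE_STAT_CNTRS)
            return -1;
        return 0;
    }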

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4] kernel/linux: remove igb_uio
    2020-10-05  9:38  2% ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
@ 2020-10-05  9:42  2% ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-05  9:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, ferruh.yigit, jerinj, stephen, Nicolas Chautru,
	Somalapuram Amaranath, John Griffin, Fiona Trahe,
	Deepak Kumar Jain, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ray Kinsella, Neil Horman,
	Anatoly Burakov

As decided in the Technical Board in November 2019,
the kernel module igb_uio is moved to the dpdk-kmods repository
in the /linux/igb_uio/ directory.

Minutes of Technical Board meeting:
https://mails.dpdk.org/archives/dev/2019-November/151763.html

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
v4: Cc more maintainers
v3: update more docs and provide a link to the new repo
v2: update few docs (including release notes)
---
 MAINTAINERS                                |   1 -
 doc/guides/bbdevs/fpga_5gnr_fec.rst        |   3 +-
 doc/guides/bbdevs/fpga_lte_fec.rst         |   3 +-
 doc/guides/cryptodevs/ccp.rst              |   3 +-
 doc/guides/cryptodevs/qat.rst              |   3 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst  |   2 +-
 doc/guides/howto/lm_virtio_vhost_user.rst  |   2 +-
 doc/guides/howto/openwrt.rst               |   5 -
 doc/guides/linux_gsg/enable_func.rst       |   3 +-
 doc/guides/linux_gsg/linux_drivers.rst     |  23 +-
 doc/guides/nics/build_and_test.rst         |   2 +-
 doc/guides/nics/ena.rst                    |   4 +-
 doc/guides/rel_notes/deprecation.rst       |   7 -
 doc/guides/rel_notes/release_20_11.rst     |   4 +-
 doc/guides/sample_app_ug/multi_process.rst |   2 -
 drivers/bus/pci/bsd/pci.c                  |   2 +-
 kernel/linux/igb_uio/Kbuild                |   2 -
 kernel/linux/igb_uio/compat.h              | 154 -----
 kernel/linux/igb_uio/igb_uio.c             | 660 ---------------------
 kernel/linux/igb_uio/meson.build           |  20 -
 kernel/linux/meson.build                   |   2 +-
 21 files changed, 21 insertions(+), 886 deletions(-)
 delete mode 100644 kernel/linux/igb_uio/Kbuild
 delete mode 100644 kernel/linux/igb_uio/compat.h
 delete mode 100644 kernel/linux/igb_uio/igb_uio.c
 delete mode 100644 kernel/linux/igb_uio/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 681093d949..f15eec0c35 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -290,7 +290,6 @@ F: doc/guides/linux_gsg/
 
 Linux UIO
 M: Ferruh Yigit <ferruh.yigit@intel.com>
-F: kernel/linux/igb_uio/
 F: drivers/bus/pci/linux/*uio*
 
 Linux VFIO
diff --git a/doc/guides/bbdevs/fpga_5gnr_fec.rst b/doc/guides/bbdevs/fpga_5gnr_fec.rst
index 6760391e8c..709e7baed9 100644
--- a/doc/guides/bbdevs/fpga_5gnr_fec.rst
+++ b/doc/guides/bbdevs/fpga_5gnr_fec.rst
@@ -93,8 +93,7 @@ the UIO driver by repeating this command for every function.
 
 .. code-block:: console
 
-  cd <dpdk-top-level-directory>
-  insmod ./build/kmod/igb_uio.ko
+  insmod igb_uio.ko
   echo "8086 0d8f" > /sys/bus/pci/drivers/igb_uio/new_id
   lspci -vd8086:0d8f
 
diff --git a/doc/guides/bbdevs/fpga_lte_fec.rst b/doc/guides/bbdevs/fpga_lte_fec.rst
index fdc8a76981..344a2cc06a 100644
--- a/doc/guides/bbdevs/fpga_lte_fec.rst
+++ b/doc/guides/bbdevs/fpga_lte_fec.rst
@@ -92,8 +92,7 @@ the UIO driver by repeating this command for every function.
 
 .. code-block:: console
 
-  cd <dpdk-top-level-directory>
-  insmod ./build/kmod/igb_uio.ko
+  insmod igb_uio.ko
   echo "1172 5052" > /sys/bus/pci/drivers/igb_uio/new_id
   lspci -vd1172:
 
diff --git a/doc/guides/cryptodevs/ccp.rst b/doc/guides/cryptodevs/ccp.rst
index a43fe92de9..9c1997768a 100644
--- a/doc/guides/cryptodevs/ccp.rst
+++ b/doc/guides/cryptodevs/ccp.rst
@@ -75,9 +75,8 @@ Initialization
 Bind the CCP devices to DPDK UIO driver module before running the CCP PMD stack.
 e.g. for the 0x1456 device::
 
-	cd to the top-level DPDK directory
 	modprobe uio
-	insmod ./build/kmod/igb_uio.ko
+	insmod igb_uio.ko
 	echo "1022 1456" > /sys/bus/pci/drivers/igb_uio/new_id
 
 Another way to bind the CCP devices to DPDK UIO driver is by using the ``dpdk-devbind.py`` script.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index e5d2cf4997..7c56293192 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -642,9 +642,8 @@ Install the DPDK igb_uio driver, bind the VF PCI Device id to it and use lspci
 to confirm the VF devices are now in use by igb_uio kernel driver,
 e.g. for the C62x device::
 
-    cd to the top-level DPDK directory
     modprobe uio
-    insmod ./build/kmod/igb_uio.ko
+    insmod igb_uio.ko
     echo "8086 37c9" > /sys/bus/pci/drivers/igb_uio/new_id
     lspci -vvd:37c9
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index 02ba1cdf5d..16d86d122c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -591,7 +591,7 @@ Set up DPDK in the Virtual Machine
    rmmod virtio-pci ixgbevf
 
    modprobe uio
-   insmod /root/dpdk/<build_dir>/kernel/linux/igb_uio/igb_uio.ko
+   insmod igb_uio.ko
 
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0
diff --git a/doc/guides/howto/lm_virtio_vhost_user.rst b/doc/guides/howto/lm_virtio_vhost_user.rst
index 330ff5a9c8..e495ac976e 100644
--- a/doc/guides/howto/lm_virtio_vhost_user.rst
+++ b/doc/guides/howto/lm_virtio_vhost_user.rst
@@ -421,7 +421,7 @@ setup_dpdk_virtio_in_vm.sh
    rmmod virtio-pci
 
    modprobe uio
-   insmod /root/dpdk/<build_dir>/kernel/linux/igb_uio/igb_uio.ko
+   insmod igb_uio.ko
 
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0
diff --git a/doc/guides/howto/openwrt.rst b/doc/guides/howto/openwrt.rst
index 6081f057be..e1d7db2a90 100644
--- a/doc/guides/howto/openwrt.rst
+++ b/doc/guides/howto/openwrt.rst
@@ -103,11 +103,6 @@ first.
     meson builddir --cross-file openwrt-cross
     ninja -C builddir
 
-.. note::
-
-    For compiling the igb_uio with the kernel version used in target machine,
-    you need to explicitly specify kernel_dir in meson_options.txt.
-
 Running DPDK application on OpenWrt
 -----------------------------------
 
diff --git a/doc/guides/linux_gsg/enable_func.rst b/doc/guides/linux_gsg/enable_func.rst
index 06c17e4058..aab32252ea 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -155,4 +155,5 @@ This results in pass-through of the DMAR (DMA Remapping) lookup in the host.
 Also, if ``INTEL_IOMMU_DEFAULT_ON`` is not set in the kernel, the ``intel_iommu=on`` kernel parameter must be used too.
 This ensures that the Intel IOMMU is being initialized as expected.
 
-Please note that while using ``iommu=pt`` is compulsory for ``igb_uio driver``, the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
+Please note that while using ``iommu=pt`` is compulsory for ``igb_uio`` driver,
+the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 7789c572bb..080b44955a 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -27,33 +27,20 @@ can provide the uio capability. This module can be loaded using the command:
 
     ``uio_pci_generic`` module doesn't support the creation of virtual functions.
 
-As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
-module which can be found in the kernel/linux subdirectory referred to above. It can
-be loaded as shown below:
+As an alternative to the ``uio_pci_generic``, there is the ``igb_uio`` module
+which can be found in the repository `dpdk-kmods <http://git.dpdk.org/dpdk-kmods>`_.
+It can be loaded as shown below:
 
 .. code-block:: console
 
     sudo modprobe uio
-    sudo insmod <build_dir>/kernel/linux/igb_uio/igb_uio.ko
-
-.. note::
-
-   Building DPDK Linux kernel modules is disabled by default starting from DPDK 20.02.
-   To enable them again, the config option "enable_kmods" needs to be set
-   in the meson build configuration.
-   See :ref:`adjusting_build_options` for details on how to set/clear build options.
-   It is planned to move ``igb_uio`` module to a different git repository.
-
-.. note::
-
-    For some devices which lack support for legacy interrupts, e.g. virtual function
-    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
+    sudo insmod igb_uio.ko
 
 .. note::
 
    If UEFI secure boot is enabled, the Linux kernel may disallow the use of
    UIO on the system. Therefore, devices for use by DPDK should be bound to the
-   ``vfio-pci`` kernel module rather than ``igb_uio`` or ``uio_pci_generic``.
+   ``vfio-pci`` kernel module rather than any UIO-based module.
    For more details see :ref:`linux_gsg_binding_kernel` below.
 
 .. note::
diff --git a/doc/guides/nics/build_and_test.rst b/doc/guides/nics/build_and_test.rst
index 3138c0f880..ba196382a9 100644
--- a/doc/guides/nics/build_and_test.rst
+++ b/doc/guides/nics/build_and_test.rst
@@ -69,7 +69,7 @@ This section demonstrates how to setup and run ``testpmd`` in Linux.
    .. code-block:: console
 
       modprobe uio
-      insmod ./x86_64-native-linux-gcc/kmod/igb_uio.ko
+      insmod igb_uio.ko
 
    or
 
diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index bec97c3326..3a6074cdf6 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -169,8 +169,8 @@ Prerequisites
    (*) ENAv2 hardware supports Low Latency Queue v2 (LLQv2). This feature
    reduces the latency of the packets by pushing the header directly through
    the PCI to the device, before the DMA is even triggered. For proper work
-   kernel PCI driver must support write combining (WC). In mainline version of
-   ``igb_uio`` (in DPDK repo) it must be enabled by loading module with
+   kernel PCI driver must support write combining (WC).
+   In DPDK ``igb_uio`` it must be enabled by loading module with
    ``wc_activate=1`` flag (example below). However, mainline's vfio-pci
    driver in kernel doesn't have WC support yet (planed to be added).
    If vfio-pci used user should be either turn off ENAv2 (to avoid performance
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0be208edca..2d2a7d20b6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -78,13 +78,6 @@ Deprecation Notices
   These wrappers must be used for patches that need to be merged in 20.08
   onwards. This change will not introduce any performance degradation.
 
-* igb_uio: In the view of reducing the kernel dependency from the main tree,
-  as a first step, the Technical Board decided to move ``igb_uio``
-  kernel module to the dpdk-kmods repository in the /linux/igb_uio/ directory
-  in 20.11.
-  Minutes of Technical Board Meeting of `2019-11-06
-  <https://mails.dpdk.org/archives/dev/2019-November/151763.html>`_.
-
 * lib: will fix extending some enum/define breaking the ABI. There are multiple
   samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
   used by iterators, and arrays holding these values are sized with this
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4e61431c6c..243bd940a4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -122,9 +122,11 @@ Removed Items
 
 * vhost: Dequeue zero-copy support has been removed.
 
+* kernel: The module ``igb_uio`` has been moved to the git repository
+``dpdk-kmods`` in a new directory ``linux/igb_uio``.
+
 * Removed Python 2 support since it was EOL'd in January 2020.
 
-
 API Changes
 -----------
 
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index f2a79a6397..bd329c2db2 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -64,8 +64,6 @@ The process should start successfully and display a command prompt as follows:
     EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
     ...
 
-    EAL: check igb_uio module
-    EAL: check module finished
     EAL: Master core 0 is ready (tid=54e41820)
     EAL: Core 1 is ready (tid=53b32700)
 
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index 2ed8261349..97c611737a 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -50,7 +50,7 @@
  * This code is used to simulate a PCI probe by parsing information in
  * sysfs. Moreover, when a registered driver matches a device, the
  * kernel driver currently using it is unloaded and replaced by
- * igb_uio module, which is a very minimal userland driver for Intel
+ * nic_uio module, which is a very minimal userland driver for Intel
  * network card, only providing access to PCI BAR to applications, and
  * enabling bus master.
  */
diff --git a/kernel/linux/igb_uio/Kbuild b/kernel/linux/igb_uio/Kbuild
deleted file mode 100644
index 3ab85c4116..0000000000
diff --git a/kernel/linux/igb_uio/compat.h b/kernel/linux/igb_uio/compat.h
deleted file mode 100644
index 8dbb896ae1..0000000000
diff --git a/kernel/linux/igb_uio/igb_uio.c b/kernel/linux/igb_uio/igb_uio.c
deleted file mode 100644
index 039f5a5f63..0000000000
diff --git a/kernel/linux/igb_uio/meson.build b/kernel/linux/igb_uio/meson.build
deleted file mode 100644
index 80540aecee..0000000000
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
index da79df1687..5c864a4653 100644
--- a/kernel/linux/meson.build
+++ b/kernel/linux/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
-subdirs = ['igb_uio', 'kni']
+subdirs = ['kni']
 
 # if we are cross-compiling we need kernel_dir specified
 if get_option('kernel_dir') == '' and meson.is_cross_build()
-- 
2.28.0


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v3] kernel: remove igb_uio
  @ 2020-10-05  9:38  2% ` Thomas Monjalon
  2020-10-05  9:42  2% ` [dpdk-dev] [PATCH v4] kernel/linux: " Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-05  9:38 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, ferruh.yigit, jerinj, stephen

As decided in the Technical Board in November 2019,
the kernel module igb_uio is moved to the dpdk-kmods repository
in the /linux/igb_uio/ directory.

Minutes of Technical Board meeting:
https://mails.dpdk.org/archives/dev/2019-November/151763.html

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
v3: update more docs and provide a link to the new repo
v2: update few docs (including release notes)
---
 MAINTAINERS                                |   1 -
 doc/guides/bbdevs/fpga_5gnr_fec.rst        |   3 +-
 doc/guides/bbdevs/fpga_lte_fec.rst         |   3 +-
 doc/guides/cryptodevs/ccp.rst              |   3 +-
 doc/guides/cryptodevs/qat.rst              |   3 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst  |   2 +-
 doc/guides/howto/lm_virtio_vhost_user.rst  |   2 +-
 doc/guides/howto/openwrt.rst               |   5 -
 doc/guides/linux_gsg/enable_func.rst       |   3 +-
 doc/guides/linux_gsg/linux_drivers.rst     |  23 +-
 doc/guides/nics/build_and_test.rst         |   2 +-
 doc/guides/nics/ena.rst                    |   4 +-
 doc/guides/rel_notes/deprecation.rst       |   7 -
 doc/guides/rel_notes/release_20_11.rst     |   4 +-
 doc/guides/sample_app_ug/multi_process.rst |   2 -
 drivers/bus/pci/bsd/pci.c                  |   2 +-
 kernel/linux/igb_uio/Kbuild                |   2 -
 kernel/linux/igb_uio/compat.h              | 154 -----
 kernel/linux/igb_uio/igb_uio.c             | 660 ---------------------
 kernel/linux/igb_uio/meson.build           |  20 -
 kernel/linux/meson.build                   |   2 +-
 21 files changed, 21 insertions(+), 886 deletions(-)
 delete mode 100644 kernel/linux/igb_uio/Kbuild
 delete mode 100644 kernel/linux/igb_uio/compat.h
 delete mode 100644 kernel/linux/igb_uio/igb_uio.c
 delete mode 100644 kernel/linux/igb_uio/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 681093d949..f15eec0c35 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -290,7 +290,6 @@ F: doc/guides/linux_gsg/
 
 Linux UIO
 M: Ferruh Yigit <ferruh.yigit@intel.com>
-F: kernel/linux/igb_uio/
 F: drivers/bus/pci/linux/*uio*
 
 Linux VFIO
diff --git a/doc/guides/bbdevs/fpga_5gnr_fec.rst b/doc/guides/bbdevs/fpga_5gnr_fec.rst
index 6760391e8c..709e7baed9 100644
--- a/doc/guides/bbdevs/fpga_5gnr_fec.rst
+++ b/doc/guides/bbdevs/fpga_5gnr_fec.rst
@@ -93,8 +93,7 @@ the UIO driver by repeating this command for every function.
 
 .. code-block:: console
 
-  cd <dpdk-top-level-directory>
-  insmod ./build/kmod/igb_uio.ko
+  insmod igb_uio.ko
   echo "8086 0d8f" > /sys/bus/pci/drivers/igb_uio/new_id
   lspci -vd8086:0d8f
 
diff --git a/doc/guides/bbdevs/fpga_lte_fec.rst b/doc/guides/bbdevs/fpga_lte_fec.rst
index fdc8a76981..344a2cc06a 100644
--- a/doc/guides/bbdevs/fpga_lte_fec.rst
+++ b/doc/guides/bbdevs/fpga_lte_fec.rst
@@ -92,8 +92,7 @@ the UIO driver by repeating this command for every function.
 
 .. code-block:: console
 
-  cd <dpdk-top-level-directory>
-  insmod ./build/kmod/igb_uio.ko
+  insmod igb_uio.ko
   echo "1172 5052" > /sys/bus/pci/drivers/igb_uio/new_id
   lspci -vd1172:
 
diff --git a/doc/guides/cryptodevs/ccp.rst b/doc/guides/cryptodevs/ccp.rst
index a43fe92de9..9c1997768a 100644
--- a/doc/guides/cryptodevs/ccp.rst
+++ b/doc/guides/cryptodevs/ccp.rst
@@ -75,9 +75,8 @@ Initialization
 Bind the CCP devices to DPDK UIO driver module before running the CCP PMD stack.
 e.g. for the 0x1456 device::
 
-	cd to the top-level DPDK directory
 	modprobe uio
-	insmod ./build/kmod/igb_uio.ko
+	insmod igb_uio.ko
 	echo "1022 1456" > /sys/bus/pci/drivers/igb_uio/new_id
 
 Another way to bind the CCP devices to DPDK UIO driver is by using the ``dpdk-devbind.py`` script.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index e5d2cf4997..7c56293192 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -642,9 +642,8 @@ Install the DPDK igb_uio driver, bind the VF PCI Device id to it and use lspci
 to confirm the VF devices are now in use by igb_uio kernel driver,
 e.g. for the C62x device::
 
-    cd to the top-level DPDK directory
     modprobe uio
-    insmod ./build/kmod/igb_uio.ko
+    insmod igb_uio.ko
     echo "8086 37c9" > /sys/bus/pci/drivers/igb_uio/new_id
     lspci -vvd:37c9
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index 02ba1cdf5d..16d86d122c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -591,7 +591,7 @@ Set up DPDK in the Virtual Machine
    rmmod virtio-pci ixgbevf
 
    modprobe uio
-   insmod /root/dpdk/<build_dir>/kernel/linux/igb_uio/igb_uio.ko
+   insmod igb_uio.ko
 
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0
diff --git a/doc/guides/howto/lm_virtio_vhost_user.rst b/doc/guides/howto/lm_virtio_vhost_user.rst
index 330ff5a9c8..e495ac976e 100644
--- a/doc/guides/howto/lm_virtio_vhost_user.rst
+++ b/doc/guides/howto/lm_virtio_vhost_user.rst
@@ -421,7 +421,7 @@ setup_dpdk_virtio_in_vm.sh
    rmmod virtio-pci
 
    modprobe uio
-   insmod /root/dpdk/<build_dir>/kernel/linux/igb_uio/igb_uio.ko
+   insmod igb_uio.ko
 
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
    /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0
diff --git a/doc/guides/howto/openwrt.rst b/doc/guides/howto/openwrt.rst
index 6081f057be..e1d7db2a90 100644
--- a/doc/guides/howto/openwrt.rst
+++ b/doc/guides/howto/openwrt.rst
@@ -103,11 +103,6 @@ first.
     meson builddir --cross-file openwrt-cross
     ninja -C builddir
 
-.. note::
-
-    For compiling the igb_uio with the kernel version used in target machine,
-    you need to explicitly specify kernel_dir in meson_options.txt.
-
 Running DPDK application on OpenWrt
 -----------------------------------
 
diff --git a/doc/guides/linux_gsg/enable_func.rst b/doc/guides/linux_gsg/enable_func.rst
index 06c17e4058..aab32252ea 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -155,4 +155,5 @@ This results in pass-through of the DMAR (DMA Remapping) lookup in the host.
 Also, if ``INTEL_IOMMU_DEFAULT_ON`` is not set in the kernel, the ``intel_iommu=on`` kernel parameter must be used too.
 This ensures that the Intel IOMMU is being initialized as expected.
 
-Please note that while using ``iommu=pt`` is compulsory for ``igb_uio driver``, the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
+Please note that while using ``iommu=pt`` is compulsory for ``igb_uio`` driver,
+the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 7789c572bb..080b44955a 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -27,33 +27,20 @@ can provide the uio capability. This module can be loaded using the command:
 
     ``uio_pci_generic`` module doesn't support the creation of virtual functions.
 
-As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
-module which can be found in the kernel/linux subdirectory referred to above. It can
-be loaded as shown below:
+As an alternative to the ``uio_pci_generic``, there is the ``igb_uio`` module
+which can be found in the repository `dpdk-kmods <http://git.dpdk.org/dpdk-kmods>`_.
+It can be loaded as shown below:
 
 .. code-block:: console
 
     sudo modprobe uio
-    sudo insmod <build_dir>/kernel/linux/igb_uio/igb_uio.ko
-
-.. note::
-
-   Building DPDK Linux kernel modules is disabled by default starting from DPDK 20.02.
-   To enable them again, the config option "enable_kmods" needs to be set
-   in the meson build configuration.
-   See :ref:`adjusting_build_options` for details on how to set/clear build options.
-   It is planned to move ``igb_uio`` module to a different git repository.
-
-.. note::
-
-    For some devices which lack support for legacy interrupts, e.g. virtual function
-    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
+    sudo insmod igb_uio.ko
 
 .. note::
 
    If UEFI secure boot is enabled, the Linux kernel may disallow the use of
    UIO on the system. Therefore, devices for use by DPDK should be bound to the
-   ``vfio-pci`` kernel module rather than ``igb_uio`` or ``uio_pci_generic``.
+   ``vfio-pci`` kernel module rather than any UIO-based module.
    For more details see :ref:`linux_gsg_binding_kernel` below.
 
 .. note::
diff --git a/doc/guides/nics/build_and_test.rst b/doc/guides/nics/build_and_test.rst
index 3138c0f880..ba196382a9 100644
--- a/doc/guides/nics/build_and_test.rst
+++ b/doc/guides/nics/build_and_test.rst
@@ -69,7 +69,7 @@ This section demonstrates how to setup and run ``testpmd`` in Linux.
    .. code-block:: console
 
       modprobe uio
-      insmod ./x86_64-native-linux-gcc/kmod/igb_uio.ko
+      insmod igb_uio.ko
 
    or
 
diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index bec97c3326..3a6074cdf6 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -169,8 +169,8 @@ Prerequisites
    (*) ENAv2 hardware supports Low Latency Queue v2 (LLQv2). This feature
    reduces the latency of the packets by pushing the header directly through
    the PCI to the device, before the DMA is even triggered. For proper work
-   kernel PCI driver must support write combining (WC). In mainline version of
-   ``igb_uio`` (in DPDK repo) it must be enabled by loading module with
+   kernel PCI driver must support write combining (WC).
+   In DPDK ``igb_uio`` it must be enabled by loading module with
    ``wc_activate=1`` flag (example below). However, mainline's vfio-pci
    driver in kernel doesn't have WC support yet (planed to be added).
    If vfio-pci used user should be either turn off ENAv2 (to avoid performance
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0be208edca..2d2a7d20b6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -78,13 +78,6 @@ Deprecation Notices
   These wrappers must be used for patches that need to be merged in 20.08
   onwards. This change will not introduce any performance degradation.
 
-* igb_uio: In the view of reducing the kernel dependency from the main tree,
-  as a first step, the Technical Board decided to move ``igb_uio``
-  kernel module to the dpdk-kmods repository in the /linux/igb_uio/ directory
-  in 20.11.
-  Minutes of Technical Board Meeting of `2019-11-06
-  <https://mails.dpdk.org/archives/dev/2019-November/151763.html>`_.
-
 * lib: will fix extending some enum/define breaking the ABI. There are multiple
   samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
   used by iterators, and arrays holding these values are sized with this
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4e61431c6c..243bd940a4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -122,9 +122,11 @@ Removed Items
 
 * vhost: Dequeue zero-copy support has been removed.
 
+* kernel: The module ``igb_uio`` has been moved to the git repository
+``dpdk-kmods`` in a new directory ``linux/igb_uio``.
+
 * Removed Python 2 support since it was EOL'd in January 2020.
 
-
 API Changes
 -----------
 
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index f2a79a6397..bd329c2db2 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -64,8 +64,6 @@ The process should start successfully and display a command prompt as follows:
     EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
     ...
 
-    EAL: check igb_uio module
-    EAL: check module finished
     EAL: Master core 0 is ready (tid=54e41820)
     EAL: Core 1 is ready (tid=53b32700)
 
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index 2ed8261349..97c611737a 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -50,7 +50,7 @@
  * This code is used to simulate a PCI probe by parsing information in
  * sysfs. Moreover, when a registered driver matches a device, the
  * kernel driver currently using it is unloaded and replaced by
- * igb_uio module, which is a very minimal userland driver for Intel
+ * nic_uio module, which is a very minimal userland driver for Intel
  * network card, only providing access to PCI BAR to applications, and
  * enabling bus master.
  */
diff --git a/kernel/linux/igb_uio/Kbuild b/kernel/linux/igb_uio/Kbuild
deleted file mode 100644
index 3ab85c4116..0000000000
diff --git a/kernel/linux/igb_uio/compat.h b/kernel/linux/igb_uio/compat.h
deleted file mode 100644
index 8dbb896ae1..0000000000
diff --git a/kernel/linux/igb_uio/igb_uio.c b/kernel/linux/igb_uio/igb_uio.c
deleted file mode 100644
index 039f5a5f63..0000000000
diff --git a/kernel/linux/igb_uio/meson.build b/kernel/linux/igb_uio/meson.build
deleted file mode 100644
index 80540aecee..0000000000
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
index da79df1687..5c864a4653 100644
--- a/kernel/linux/meson.build
+++ b/kernel/linux/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
-subdirs = ['igb_uio', 'kni']
+subdirs = ['kni']
 
 # if we are cross-compiling we need kernel_dir specified
 if get_option('kernel_dir') == '' and meson.is_cross_build()
-- 
2.28.0


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v2] meter: remove experimental alias
  @ 2020-10-05  9:33  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-05  9:33 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Cristian Dumitrescu, Ray Kinsella, Neil Horman, dev

On Mon, Aug 17, 2020 at 12:22 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> Remove ABI versioning for APIs:
> 'rte_meter_trtcm_rfc4115_profile_config()'
> 'rte_meter_trtcm_rfc4115_config()'
>
> The alias was introduced in
> commit 60197bda97a0 ("meter: provide experimental alias for matured API")
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>

Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

Applied, thanks Ferruh.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] ipsec: remove experimental tag
  @ 2020-10-05  8:59  0%     ` Kinsella, Ray
  2020-10-06 20:11  0%       ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-05  8:59 UTC (permalink / raw)
  To: dev



On 16/09/2020 12:22, Ananyev, Konstantin wrote:
> 
>> Since librte_ipsec was first introduced in 19.02 and there were no changes
>> in it's public API since 19.11, it should be considered mature enough to
>> remove the 'experimental' tag from it.
>> The RTE_SATP_LOG2_NUM enum is also being dropped from rte_ipsec_sa.h to
>> avoid possible ABI problems in the future.
>>
>> ---
> 
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
>> 2.25.1
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 10/11] doc: update release notes for MLX5 L3 frag support
  2020-10-05  8:35  3%   ` [dpdk-dev] [PATCH v3 00/11] support match on L3 fragmented packets Dekel Peled
@ 2020-10-05  8:35  8%     ` Dekel Peled
  2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
  1 sibling, 0 replies; 200+ results
From: Dekel Peled @ 2020-10-05  8:35 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch updates 20.11 release notes with the changes included in
patches of this series:
1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
   packets.
2) ABI change in ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 1a9945f..8a244fe 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -109,6 +109,11 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
@@ -238,6 +243,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values added to struct, indicating the existence of
+    every defined extension header type.
+    Applications should use the new values for identification of existing
+    extensions in the packet header.
 
 Known Issues
 ------------
-- 
1.8.3.1


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v3 00/11] support match on L3 fragmented packets
      2020-10-01 21:15  8%   ` [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-05  8:35  3%   ` Dekel Peled
  2020-10-05  8:35  8%     ` [dpdk-dev] [PATCH v3 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
  2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
  2 siblings, 2 replies; 200+ results
From: Dekel Peled @ 2020-10-05  8:35 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This series implements support for matching on packets based on the
fragmentation attribute of the packet, i.e. whether the packet is a
fragment of a larger packet or not.

In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
In MLX5 PMD, support match on IPv4 fragmented packets, IPv6 fragmented
packets, and IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.
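
As a usage sketch (illustrative values only, not taken from the series),
a flow pattern matching only non-fragmented IPv4 packets masks the
fragment_offset field (MF flag plus 13-bit offset) against zero:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Match "IPv4 and not a fragment": MF flag and fragment offset both 0. */
    static const struct rte_flow_item_ipv4 ipv4_spec = {
        .hdr.fragment_offset = RTE_BE16(0),
    };
    static const struct rte_flow_item_ipv4 ipv4_mask = {
        .hdr.fragment_offset = RTE_BE16(0x3fff),
    };

    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ipv4_spec, .mask = &ipv4_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

Matching fragmented packets would invert the spec (non-zero offset or MF
set), which on some devices requires a spec/last range on the same field.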

---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
---

Dekel Peled (11):
  ethdev: add extensions attributes to IPv6 item
  ethdev: add IPv6 fragment extension header item
  app/testpmd: support IPv4 fragments
  app/testpmd: support IPv6 fragments
  app/testpmd: support IPv6 fragment extension item
  net/mlx5: remove handling of ICMP fragmented packets
  net/mlx5: support match on IPv4 fragment packets
  net/mlx5: support match on IPv6 fragment packets
  net/mlx5: support match on IPv6 fragment ext. item
  doc: update release notes for MLX5 L3 frag support
  net/mlx5: enforce limitation on IPv6 next proto

 app/test-pmd/cmdline_flow.c            |  53 +++++
 doc/guides/nics/mlx5.rst               |   7 +
 doc/guides/prog_guide/rte_flow.rst     |  28 ++-
 doc/guides/rel_notes/release_20_11.rst |  10 +
 drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
 drivers/net/mlx5/mlx5_flow.h           |  14 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 382 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 lib/librte_ethdev/rte_flow.c           |   1 +
 lib/librte_ethdev/rte_flow.h           |  45 +++-
 lib/librte_ip_frag/rte_ip_frag.h       |  26 +--
 lib/librte_net/rte_ip.h                |  26 ++-
 12 files changed, 573 insertions(+), 90 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support
  2020-10-01 21:15  8%   ` [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
@ 2020-10-04 13:55  0%     ` Ori Kam
  0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2020-10-04 13:55 UTC (permalink / raw)
  To: Dekel Peled, NBU-Contact-Thomas Monjalon, ferruh.yigit,
	arybchenko, konstantin.ananyev, olivier.matz, wenzhuo.lu,
	beilei.xing, bernard.iremonger, Matan Azrad, Shahaf Shuler,
	Slava Ovsiienko
  Cc: dev

Hi

> -----Original Message-----
> From: Dekel Peled <dekelp@nvidia.com>
> Sent: Friday, October 2, 2020 12:15 AM
> Subject: [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support
> 
> This patch updates 20.11 release notes with the changes included in
> patches of this series:
> 1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
>    packets.
> 2) ABI change in ethdev struct rte_flow_item_ipv6.
> 
> Signed-off-by: Dekel Peled <dekelp@nvidia.com>
> ---
>  doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_20_11.rst
> b/doc/guides/rel_notes/release_20_11.rst
> index 7f9d0dd..91e1773 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -90,6 +90,11 @@ New Features
> 
>    * Added support for 200G PAM4 link speed.
> 
> +* **Updated Mellanox mlx5 driver.**
> +
> +  Updated Mellanox mlx5 driver with new features and improvements,
> including:
> +
> +  * Added support for matching on fragmented/non-fragmented IPv4/IPv6
> packets.
> 
>  Removed Items
>  -------------
> @@ -215,6 +220,11 @@ ABI Changes
> 
>    * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
> 
> +  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
> +    A set of additional values added to struct, indicating the existence of
> +    every defined extension header type.
> +    Applications should use the new values for identification of existing
> +    extensions in the packet header.
> 
>  Known Issues
>  ------------
> --
> 1.8.3.1


Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 02/11] ethdev: add IPv6 fragment extension header item
  @ 2020-10-01 21:27  4%     ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-01 21:27 UTC (permalink / raw)
  To: Dekel Peled
  Cc: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo, dev

On Fri,  2 Oct 2020 00:14:59 +0300
Dekel Peled <dekelp@nvidia.com> wrote:

> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 5b5bed2..1443e6a 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -368,6 +368,13 @@ enum rte_flow_item_type {
>  	RTE_FLOW_ITEM_TYPE_IPV6_EXT,
>  
>  	/**
> +	 * Matches the presence of IPv6 fragment extension header.
> +	 *
> +	 * See struct rte_flow_item_ipv6_frag_ext.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
> +
> +	/**
>  	 * Matches any ICMPv6 header.
>  	 *
>  	 * See struct rte_flow_item_icmp6

Putting a new enum value in the middle of an existing list will renumber
the ones below it. This causes an ABI breakage.

Since the ABI breakage was not preannounced, this patch needs to
be revised or approved by the TAB.
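
A minimal illustration of the renumbering effect (generic names, not the
actual rte_flow item list):

    /* Consumers of the old header compile with TYPE_ICMP6 == 2 ... */
    enum item_type_old { TYPE_ETH, TYPE_IPV6_EXT, TYPE_ICMP6 };

    /* ... while after an insertion in the middle the same symbol is 3,
     * so binaries built against the old header pass stale numeric values
     * to the new library.
     */
    enum item_type_new {
        TYPE2_ETH, TYPE2_IPV6_EXT, TYPE2_IPV6_FRAG_EXT, TYPE2_ICMP6
    };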

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support
    @ 2020-10-01 21:15  8%   ` Dekel Peled
  2020-10-04 13:55  0%     ` Ori Kam
  2020-10-05  8:35  3%   ` [dpdk-dev] [PATCH v3 00/11] support match on L3 fragmented packets Dekel Peled
  2 siblings, 1 reply; 200+ results
From: Dekel Peled @ 2020-10-01 21:15 UTC (permalink / raw)
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch updates 20.11 release notes with the changes included in
patches of this series:
1) MLX5 support of matching on IPv4/IPv6 fragmented/non-fragmented
   packets.
2) ABI change in ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7f9d0dd..91e1773 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -90,6 +90,11 @@ New Features
 
   * Added support for 200G PAM4 link speed.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
@@ -215,6 +220,11 @@ ABI Changes
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values was added to the struct, indicating the
+    existence of every defined extension header type.
+    Applications should use the new values to identify existing
+    extensions in the packet header.
 
 Known Issues
 ------------
-- 
1.8.3.1


^ permalink raw reply	[relevance 8%]
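
The ABI note above describes new per-extension attributes in struct
``rte_flow_item_ipv6``. Below is a minimal sketch of how an application might
use them to match only fragmented IPv6 packets; it assumes the attribute is
named has_frag_ext as in later revisions of this series, and the helper name
is hypothetical:

  #include <string.h>
  #include <rte_flow.h>

  /* Fill a flow pattern item that matches IPv6 packets carrying a fragment
   * extension header, leaving every other field wildcarded. */
  void
  build_ipv6_frag_item(struct rte_flow_item *item,
                       struct rte_flow_item_ipv6 *spec,
                       struct rte_flow_item_ipv6 *mask)
  {
          memset(spec, 0, sizeof(*spec));
          memset(mask, 0, sizeof(*mask));
          spec->has_frag_ext = 1; /* packet has a fragment extension header */
          mask->has_frag_ext = 1; /* compare only this attribute */

          item->type = RTE_FLOW_ITEM_TYPE_IPV6;
          item->spec = spec;
          item->last = NULL;
          item->mask = mask;
  }

Keeping the mask bit set while clearing it in the spec would instead match
only non-fragmented packets, the other half of what this series enables.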

* [dpdk-dev] [PATCH] doc: add Vhost and Virtio updates to release note
@ 2020-10-01 10:36  4% Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2020-10-01 10:36 UTC (permalink / raw)
  To: dev, ferruh.yigit, thomas, david.marchand, john.mcnamara; +Cc: Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4eb3224a76..2a0f1605fe 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,12 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Updated Virtio driver.**
+
+  * Added support for Vhost-vDPA backend to Virtio-user PMD.
+  * Changed Virtio device default link speed to unknown and added support for
+    200G link speed.
+
 
 Removed Items
 -------------
@@ -91,6 +97,8 @@ Removed Items
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* vhost: Dequeue zero-copy support has been removed.
+
 
 API Changes
 -----------
@@ -172,6 +180,8 @@ API Changes
 
 * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
 
+* vhost: moved vDPA APIs from experimental to stable.
+
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2] drivers/common: mark all symbols as internal
  @ 2020-10-01  8:00  0%   ` Kinsella, Ray
  2020-10-05 23:16  0%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-01  8:00 UTC (permalink / raw)
  To: David Marchand, dev; +Cc: Anoob Joseph, Neil Horman, Liron Himi, Harman Kalra



On 01/10/2020 08:55, David Marchand wrote:
> Now that we have the internal tag, let's avoid confusion with exported
> symbols in common drivers that were using the experimental tag as a
> workaround.
> There is also no need to put internal API symbols in the public stable
> ABI.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> Changes since v1:
> - rebased and dropped iavf bits,
> 
> ---
>  drivers/common/cpt/cpt_pmd_ops_helper.h             |  8 +++++---
>  drivers/common/cpt/rte_common_cpt_version.map       | 13 +++----------
>  drivers/common/mvep/rte_common_mvep_version.map     |  2 +-
>  drivers/common/mvep/rte_mvep_common.h               |  3 +++
>  drivers/common/octeontx/octeontx_mbox.h             |  5 +++++
>  .../common/octeontx/rte_common_octeontx_version.map |  2 +-
>  6 files changed, 18 insertions(+), 15 deletions(-)
> 
[SNIP]

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]
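
For context, the pattern this patch applies looks roughly like the sketch
below; the function names and signatures are illustrative rather than the
exact driver code. Exported-but-internal symbols gain the __rte_internal
attribute in the header:

  /* rte_mvep_common.h (sketch) */
  #include <rte_compat.h>
  #include <rte_kvargs.h>

  __rte_internal
  int rte_mvep_init(int module, struct rte_kvargs *kvlist);

  __rte_internal
  int rte_mvep_deinit(int module);

In the matching rte_common_mvep_version.map, those symbols then sit in an
INTERNAL section (global: the listed symbols; local: *) instead of an
EXPERIMENTAL or versioned ABI section, matching the intent stated in the
commit message.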

-- links below jump to the message on this page --
2020-02-24 20:39     [dpdk-dev] [RFC 0/1] lib/ring: add scatter gather and serial dequeue APIs Honnappa Nagarahalli
2020-10-06 13:29     ` [dpdk-dev] [RFC v2 0/1] lib/ring: add scatter gather APIs Honnappa Nagarahalli
2020-10-06 13:29       ` [dpdk-dev] [RFC v2 1/1] " Honnappa Nagarahalli
2020-10-12 16:20         ` Ananyev, Konstantin
2020-10-12 22:31  4%       ` Honnappa Nagarahalli
2020-10-13 11:38  0%         ` Ananyev, Konstantin
2020-06-24  9:36     [dpdk-dev] [PATCH 20.11] eal: simplify exit functions Thomas Monjalon
2020-09-28  0:00     ` [dpdk-dev] [PATCH v2] " Thomas Monjalon
2020-10-08  7:51  0%   ` David Marchand
2020-06-25 16:03     [dpdk-dev] [PATCH 0/2] ethdev: tunnel offload model Gregory Etelson
2020-10-16 12:51     ` [dpdk-dev] [PATCH v8 0/3] Tunnel Offload API Gregory Etelson
2020-10-16 12:51       ` [dpdk-dev] [PATCH v8 2/3] ethdev: tunnel offload model Gregory Etelson
2020-10-16 15:41  3%     ` Kinsella, Ray
2020-07-21  9:51     [dpdk-dev] [PATCH 20.11 00/20] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-10-07 16:29     ` [dpdk-dev] [PATCH v5 00/25] " Bruce Richardson
2020-10-07 16:30  3%   ` [dpdk-dev] [PATCH v5 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-10-08  9:51     ` [dpdk-dev] [PATCH v6 00/25] raw/ioat: enhancements and new hardware support Bruce Richardson
2020-10-08  9:51  3%   ` [dpdk-dev] [PATCH v6 07/25] raw/ioat: rename functions to be operation-agnostic Bruce Richardson
2020-07-30 19:49     [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites McDaniel, Timothy
2020-10-17 19:03  3% ` [dpdk-dev] [PATCH v5 00/22] Add DLB PMD Timothy McDaniel
2020-07-30 21:06     [dpdk-dev] [PATCH v2 0/7] cmdline: support Windows Dmitry Kozlyuk
2020-09-28 21:50     ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2020-10-05 15:33  0%   ` Olivier Matz
2020-08-07 12:29     [dpdk-dev] [PATCH 20.11 00/19] remove make support in DPDK Ciara Power
2020-10-09 10:21     ` [dpdk-dev] [PATCH v6 00/14] " Ciara Power
2020-10-09 10:21  9%   ` [dpdk-dev] [PATCH v6 12/14] doc: remove references to make from contributing guide Ciara Power
2020-10-09 10:21  2%   ` [dpdk-dev] [PATCH v6 14/14] doc: update patch cheatsheet to use meson Ciara Power
2020-08-13 17:23     [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-05 16:26  2% ` [dpdk-dev] [PATCH v3 " Vikas Gupta
2020-10-07 16:45  2%   ` [dpdk-dev] [PATCH v4 " Vikas Gupta
2020-10-07 17:18  2%     ` [dpdk-dev] [PATCH v5 " Vikas Gupta
2020-10-07 17:18           ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-15  0:55  3%         ` Thomas Monjalon
2020-10-09 15:00  0%       ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
2020-08-14  9:59     [dpdk-dev] [PATCH] cryptodev: revert ABI compatibility for ChaCha20-Poly1305 Adam Dybkowski
2020-10-06 12:32  9% ` David Marchand
2020-10-06 14:27  4%   ` Dybkowski, AdamX
2020-10-07 10:41  4%   ` Doherty, Declan
2020-10-07 12:06  4%     ` David Marchand
2020-10-08  8:32  9% ` [dpdk-dev] [PATCH v2 0/1] cryptodev: remove v20 ABI compatibility Adam Dybkowski
2020-10-08  8:32 14%   ` [dpdk-dev] [PATCH v2 1/1] " Adam Dybkowski
2020-10-09 17:41  4%     ` Akhil Goyal
2020-08-17 10:18     [dpdk-dev] [PATCH] meter: remove experimental alias Ferruh Yigit
2020-08-17 10:22     ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2020-10-05  9:33  0%   ` David Marchand
2020-08-17 17:49     [dpdk-dev] [RFC] ethdev: introduce Rx buffer split Slava Ovsiienko
2020-10-14 18:11     ` [dpdk-dev] [PATCH v6 0/6] " Viacheslav Ovsiienko
2020-10-14 18:11       ` [dpdk-dev] [PATCH v6 1/6] " Viacheslav Ovsiienko
2020-10-14 18:57         ` Jerin Jacob
2020-10-15  7:43           ` Slava Ovsiienko
2020-10-15  9:27  3%         ` Jerin Jacob
2020-10-15 10:27  3%           ` Jerin Jacob
2020-10-15 10:51  3%             ` Slava Ovsiienko
2020-10-15 11:26  0%               ` Jerin Jacob
2020-10-15 11:36  0%                 ` Ferruh Yigit
2020-10-15 11:49  3%                   ` Slava Ovsiienko
2020-10-15 12:49  0%                     ` Thomas Monjalon
2020-10-15 13:07  0%                       ` Andrew Rybchenko
2020-10-15 13:57  0%                         ` Slava Ovsiienko
2020-10-15 20:22  0%                         ` Slava Ovsiienko
2020-10-15  9:49             ` Andrew Rybchenko
2020-10-15 10:34  3%           ` Slava Ovsiienko
2020-10-15 11:09  0%             ` Andrew Rybchenko
2020-10-15 14:39  0%               ` Slava Ovsiienko
2020-10-16 10:22     ` [dpdk-dev] [PATCH v9 0/6] " Viacheslav Ovsiienko
2020-10-16 10:22       ` [dpdk-dev] [PATCH v9 1/6] " Viacheslav Ovsiienko
2020-10-16 11:21  4%     ` Ferruh Yigit
2020-10-16 13:08  0%       ` Slava Ovsiienko
2020-08-26 15:34     [dpdk-dev] [PATCH] crypto/scheduler: rename slave to worker Adam Dybkowski
2020-09-28 14:16     ` [dpdk-dev] [PATCH v2 0/1] " Adam Dybkowski
2020-09-28 14:16       ` [dpdk-dev] [PATCH v2 1/1] " Adam Dybkowski
2020-09-28 15:12         ` Ruifeng Wang
2020-10-06 20:49  0%       ` Akhil Goyal
2020-08-31  3:41     [dpdk-dev] [RFC 1/2] Description: lib/ethdev: change data type in tc_rxq and tc_txq Min Hu(Connor)
2020-09-27  3:16     ` [dpdk-dev] [PATCH V5 2/2] ethdev: change data type in TC rxq and TC txq Min Hu (Connor)
2020-09-28  9:04       ` Ferruh Yigit
2020-09-28  9:21         ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
2020-10-05 12:26  0%       ` Ferruh Yigit
2020-10-06 12:04  0%         ` Ferruh Yigit
2020-09-28  9:16     ` [dpdk-dev] [dpdk-techboard] [PATCH V5 1/2] dpdk: resolve compiling errors for per-queue stats Thomas Monjalon
2020-09-28 13:53       ` Ferruh Yigit
2020-09-28 15:24         ` Thomas Monjalon
2020-09-28 15:43           ` Stephen Hemminger
2020-10-05 12:23  0%         ` Ferruh Yigit
2020-10-06  8:33  0%           ` Olivier Matz
2020-10-09 20:32  0%             ` Ferruh Yigit
2020-10-10  8:09  0%               ` Thomas Monjalon
2020-10-12 17:02  0%                 ` Ferruh Yigit
2020-08-31  7:53     [dpdk-dev] [PATCH] ethdev: add rx offload to drop error packets Nipun Gupta
2020-10-05  7:15     ` [dpdk-dev] [PATCH 1/3 v2] " nipun.gupta
2020-10-05 15:34       ` Stephen Hemminger
2020-10-05 16:10         ` Jerin Jacob
2020-10-06 10:37           ` Nipun Gupta
2020-10-06 12:01  3%         ` Jerin Jacob
2020-10-06 13:10  0%           ` Nipun Gupta
2020-10-06 13:13  0%             ` Jerin Jacob
2020-10-08  8:53  0%               ` Nipun Gupta
2020-10-08  8:55  0%                 ` Jerin Jacob
2020-10-08 15:13  0%                   ` Asaf Penso
2020-09-02  9:59     [dpdk-dev] [PATCH] drivers/common: mark symbols as internal David Marchand
2020-10-01  7:55     ` [dpdk-dev] [PATCH v2] drivers/common: mark all " David Marchand
2020-10-01  8:00  0%   ` Kinsella, Ray
2020-10-05 23:16  0%     ` Thomas Monjalon
2020-09-03 16:06     [dpdk-dev] [PATCH 0/7] support PDCP-SDAP for dpaa2_sec akhil.goyal
2020-10-11 21:33     ` [dpdk-dev] [PATCH v2 0/8] " Akhil Goyal
2020-10-11 21:33  4%   ` [dpdk-dev] [PATCH v2 2/8] security: modify PDCP xform to support SDAP Akhil Goyal
2020-10-12 14:09       ` [dpdk-dev] [PATCH v3 0/8] support PDCP-SDAP for dpaa2_sec Akhil Goyal
2020-10-12 14:10  4%     ` [dpdk-dev] [PATCH v3 2/8] security: modify PDCP xform to support SDAP Akhil Goyal
2020-09-03 20:09     [dpdk-dev] [PATCH] security: update session create API akhil.goyal
2020-09-24 16:22     ` Coyle, David
2020-10-10 22:06  0%   ` Akhil Goyal
2020-10-10 22:11  2% ` [dpdk-dev] [PATCH v2] " Akhil Goyal
2020-10-13  2:12  0%   ` Lukasz Wojciechowski
2020-10-14 18:56  2%   ` [dpdk-dev] [PATCH v3] " Akhil Goyal
2020-10-15  1:11  0%     ` Lukasz Wojciechowski
2020-09-07  8:15     [dpdk-dev] [PATCH 0/2] LPM changes Ruifeng Wang
2020-09-07  8:15     ` [dpdk-dev] [PATCH 2/2] lpm: hide internal data Ruifeng Wang
2020-09-15 16:02       ` Bruce Richardson
2020-09-15 16:28         ` Medvedkin, Vladimir
2020-09-16  3:17           ` Ruifeng Wang
2020-09-30  8:45             ` Kevin Traynor
2020-10-09  6:54  0%           ` Ruifeng Wang
2020-10-13 13:53  0%             ` Kevin Traynor
2020-10-13 14:58  0%               ` Michel Machado
2020-10-13 15:41  0%                 ` Medvedkin, Vladimir
2020-10-13 17:46  0%                   ` Michel Machado
2020-10-13 19:06  0%                     ` Medvedkin, Vladimir
2020-10-13 19:48  0%                       ` Michel Machado
2020-10-14 13:10  0%                         ` Medvedkin, Vladimir
2020-10-14 23:57  0%                           ` Honnappa Nagarahalli
2020-09-07 22:50     [dpdk-dev] [PATCH] kernel: remove igb_uio Thomas Monjalon
2020-10-05  9:38  2% ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
2020-10-05  9:42  2% ` [dpdk-dev] [PATCH v4] kernel/linux: " Thomas Monjalon
2020-09-08  3:05     [dpdk-dev] [PATCH 0/3] add FEC support Min Hu (Connor)
2020-10-08 10:02     ` [dpdk-dev] [PATCH V16 " Min Hu (Connor)
2020-10-08 10:02  2%   ` [dpdk-dev] [PATCH V16 1/3] ethdev: introduce FEC API Min Hu (Connor)
2020-09-11 16:58     [dpdk-dev] [PATCH 1/2] eventdev: implement ABI change Timothy McDaniel
2020-10-14 21:36  9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-14 21:36  2%   ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-14 21:36  6%   ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-15 14:26  7%   ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
2020-10-15 14:38  4%     ` McDaniel, Timothy
2020-10-15 17:31  9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
2020-10-15 17:31  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 17:31  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 17:31 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2020-10-15 18:07  9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-15 18:07  1%   ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 18:07  4%   ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 18:27  4%     ` Jerin Jacob
2020-10-15 18:07 13%   ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2020-09-11 16:58     [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-14 17:33  6% ` [dpdk-dev] [PATCH v3] " Timothy McDaniel
2020-10-14 20:01  4%   ` Jerin Jacob
2020-09-11 19:06     [dpdk-dev] [PATCH 00/15] Replace terms master/slave lcore with main/worker lcore Stephen Hemminger
2020-10-09 21:38     ` [dpdk-dev] [PATCH v4 00/17] Replace terms master/slave Stephen Hemminger
2020-10-09 21:38  1%   ` [dpdk-dev] [PATCH v4 03/17] eal: rename lcore word choices Stephen Hemminger
2020-10-14 15:27     ` [dpdk-dev] [PATCH v6 00/18] Replace terms master/slave Stephen Hemminger
2020-10-14 15:27  1%   ` [dpdk-dev] [PATCH v6 03/18] eal: rename lcore word choices Stephen Hemminger
2020-10-15 22:57     ` [dpdk-dev] [PATCH v7 00/20] Replace terms master/slave Stephen Hemminger
2020-10-15 22:57  1%   ` [dpdk-dev] [PATCH v7 03/20] eal: rename lcore word choices Stephen Hemminger
2020-09-14 12:53     [dpdk-dev] [PATCH v2] lib/ipsec: remove experimental tag Conor Walsh
2020-09-14 14:10     ` [dpdk-dev] [PATCH v3] ipsec: " Conor Walsh
2020-09-16 11:22       ` Ananyev, Konstantin
2020-10-05  8:59  0%     ` Kinsella, Ray
2020-10-06 20:11  0%       ` Akhil Goyal
2020-10-06 20:29  0%         ` Akhil Goyal
2020-09-14 18:19     [dpdk-dev] [PATCH v2 00/17] Replace terms master/slave Stephen Hemminger
2020-10-13 15:25     ` [dpdk-dev] [PATCH v5 00/18] " Stephen Hemminger
2020-10-13 15:25  1%   ` [dpdk-dev] [PATCH v5 03/18] eal: rename lcore word choices Stephen Hemminger
2020-09-15 16:50     [dpdk-dev] [PATCH v2 00/12] acl: introduce AVX512 classify method Konstantin Ananyev
2020-10-05 18:45  3% ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
2020-10-05 18:45 20%   ` [dpdk-dev] [PATCH v3 03/14] acl: remove of unused enum value Konstantin Ananyev
2020-10-06 15:03  3%   ` [dpdk-dev] [PATCH v4 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
2020-10-06 15:03 20%     ` [dpdk-dev] [PATCH v4 03/14] acl: remove of unused enum value Konstantin Ananyev
2020-10-06 15:05  3%   ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods David Marchand
2020-10-06 16:07  3%     ` Ananyev, Konstantin
2020-10-14  9:23  4%       ` Kinsella, Ray
2020-09-16 10:40     [dpdk-dev] [PATCH v3] mbuf: minor cleanup Morten Brørup
2020-10-07  9:16  0% ` Olivier Matz
2020-09-16 16:44     [dpdk-dev] [RFC PATCH 0/5] rework feature enabling macros for compatibility Bruce Richardson
2020-10-14 14:12     ` [dpdk-dev] [PATCH v3 0/7] Rework build macros Bruce Richardson
2020-10-14 14:13       ` [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines Bruce Richardson
2020-10-15 10:30         ` Luca Boccassi
2020-10-15 11:18           ` Bruce Richardson
2020-10-15 13:05  3%         ` Luca Boccassi
2020-10-15 14:03  3%           ` Bruce Richardson
2020-10-15 15:32  0%             ` Luca Boccassi
2020-10-15 15:34  0%               ` Bruce Richardson
2020-09-18 12:11     [dpdk-dev] [PATCH v4 0/4] abi breakage checks for meson Conor Walsh
2020-10-12  8:08  9% ` [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
2020-10-12  8:08 21%   ` [dpdk-dev] [PATCH v5 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-12  8:08 25%   ` [dpdk-dev] [PATCH v5 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
2020-10-12  8:08 15%   ` [dpdk-dev] [PATCH v5 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
2020-10-12  8:08 20%   ` [dpdk-dev] [PATCH v5 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
2020-10-12 13:03  9%   ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Conor Walsh
2020-10-12 13:03 21%     ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-14  9:38  4%       ` Kinsella, Ray
2020-10-12 13:03 25%     ` [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
2020-10-14  9:43  4%       ` Kinsella, Ray
2020-10-12 13:03 15%     ` [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
2020-10-14  9:44  4%       ` Kinsella, Ray
2020-10-12 13:03 18%     ` [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
2020-10-14  9:46  0%       ` Kinsella, Ray
2020-10-14  9:37  4%     ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
2020-10-14 10:33  4%       ` Walsh, Conor
2020-10-14 10:41 10%     ` [dpdk-dev] [PATCH v7 " Conor Walsh
2020-10-14 10:41 21%       ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-15 10:15  4%         ` Kinsella, Ray
2020-10-14 10:41 26%       ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
2020-10-15 10:16  4%         ` Kinsella, Ray
2020-10-14 10:41 15%       ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh Conor Walsh
2020-10-14 10:41 18%       ` [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
2020-09-24 16:34     [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
2020-10-09 21:11     ` [dpdk-dev] [dpdk-dev v11 " Fan Zhang
2020-10-09 21:11  3%   ` [dpdk-dev] [dpdk-dev v11 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
2020-10-11  0:32       ` [dpdk-dev] [dpdk-dev v12 0/4] cryptodev: add raw data-path APIs Fan Zhang
2020-10-11  0:32  3%     ` [dpdk-dev] [dpdk-dev v12 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
2020-10-11  0:38         ` [dpdk-dev] [dpdk-dev v13 0/4] cryptodev: add raw data-path APIs Fan Zhang
2020-10-11  0:38  3%       ` [dpdk-dev] [dpdk-dev v13 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
2020-09-30 14:10     [dpdk-dev] [PATCH 00/10] support match on L3 fragmented packets Dekel Peled
2020-10-01 21:14     ` [dpdk-dev] [PATCH v2 00/11] " Dekel Peled
2020-10-01 21:14       ` [dpdk-dev] [PATCH v2 02/11] ethdev: add IPv6 fragment extension header item Dekel Peled
2020-10-01 21:27  4%     ` Stephen Hemminger
2020-10-01 21:15  8%   ` [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
2020-10-04 13:55  0%     ` Ori Kam
2020-10-05  8:35  3%   ` [dpdk-dev] [PATCH v3 00/11] support match on L3 fragmented packets Dekel Peled
2020-10-05  8:35  8%     ` [dpdk-dev] [PATCH v3 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
2020-10-07 10:53  3%     ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Dekel Peled
2020-10-07 10:54  8%       ` [dpdk-dev] [PATCH v4 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
2020-10-07 11:15  0%       ` [dpdk-dev] [PATCH v4 00/11] support match on L3 fragmented packets Ori Kam
2020-10-12 10:42  3%       ` [dpdk-dev] [PATCH v5 " Dekel Peled
2020-10-12 10:43  8%         ` [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support Dekel Peled
2020-10-12 19:29  0%           ` Thomas Monjalon
2020-10-13 13:32  3%         ` [dpdk-dev] [PATCH v6 0/5] support match on L3 fragmented packets Dekel Peled
2020-10-13 13:32  4%           ` [dpdk-dev] [PATCH v6 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
2020-10-14 16:35  3% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Dekel Peled
2020-10-14 16:35  4%   ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
2020-10-14 17:18  0%   ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
2020-09-30 17:32     [dpdk-dev] [PATCH v2 0/5] cryptodev: remove list end enumerators Arek Kusztal
2020-09-30 17:32     ` [dpdk-dev] [PATCH v2 3/5] cryptodev: remove crypto " Arek Kusztal
2020-10-08 19:58  3%   ` Akhil Goyal
2020-10-12  5:15  0%     ` Kusztal, ArkadiuszX
2020-10-12 11:46  0%       ` Akhil Goyal
2020-09-30 17:32     ` [dpdk-dev] [PATCH v2 4/5] cryptodev: remove list ends from asymmetric crypto api Arek Kusztal
2020-10-08 19:51  0%   ` Akhil Goyal
2020-10-09  7:02  0%     ` Kusztal, ArkadiuszX
2020-10-01  0:25     [dpdk-dev] [PATCH 0/4] introduce support for hairpin between two ports Bing Zhao
2020-10-08  8:51     ` [dpdk-dev] [PATCH v2 0/6] " Bing Zhao
2020-10-08  8:51  5%   ` [dpdk-dev] [PATCH v2 6/6] doc: update for two ports hairpin mode Bing Zhao
2020-10-08  9:47  0%     ` Ori Kam
2020-10-08 12:05       ` [dpdk-dev] [PATCH v3 0/6] introduce support for hairpin between two ports Bing Zhao
2020-10-08 12:05  5%     ` [dpdk-dev] [PATCH v3 6/6] doc: update for two ports hairpin mode Bing Zhao
2020-10-13 16:19         ` [dpdk-dev] [PATCH v4 0/5] introduce support for hairpin between two ports Bing Zhao
2020-10-13 16:19  4%       ` [dpdk-dev] [PATCH v4 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-15  5:35     ` [dpdk-dev] [PATCH v5 0/5] introduce support for hairpin between two ports Bing Zhao
2020-10-15  5:35  4%   ` [dpdk-dev] [PATCH v5 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-15 13:08     ` [dpdk-dev] [PATCH v6 0/5] introduce support for hairpin between two ports Bing Zhao
2020-10-15 13:08  4%   ` [dpdk-dev] [PATCH v6 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-01 10:36  4% [dpdk-dev] [PATCH] doc: add Vhost and Virtio updates to release note Maxime Coquelin
2020-10-05 20:27  9% [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-05 20:27  2% ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-06  8:15  0%   ` Van Haaren, Harry
2020-10-12 19:06  0%   ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-10-05 20:27  6% ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-06  8:26  4%   ` Van Haaren, Harry
2020-10-12 19:09  4%     ` Pavan Nikhilesh Bhagavatula
2020-10-13 19:20  4%       ` Jerin Jacob
2020-10-06  7:07  7% [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Olivier Matz
2020-10-06  7:07  7% ` [dpdk-dev] [PATCH 2/2] mempool: remove experimental tags Olivier Matz
2020-10-06  8:15  4% ` [dpdk-dev] [PATCH 1/2] mempool: remove v20 ABI Bruce Richardson
2020-10-06  9:52  4% ` David Marchand
2020-10-06 11:57  4% ` David Marchand
2020-10-06 10:43  4% [dpdk-dev] [PATCH] crypto/aesni_mb: support AES-CCM-256 Pablo de Lara
2020-10-06 10:59     [dpdk-dev] [PATCH 1/3] crypto/aesni_mb: fix CCM digest size check Pablo de Lara
2020-10-06 10:59  4% ` [dpdk-dev] [PATCH 3/3] crypto/aesni_mb: support Chacha20-Poly1305 Pablo de Lara
2020-10-06 18:02     [dpdk-dev] [PATCH v7 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
2020-10-07 14:09     ` [dpdk-dev] [PATCH v8 " Savinay Dharmappa
2020-10-07 14:09       ` [dpdk-dev] [PATCH v8 8/8] sched: remove redundant code Savinay Dharmappa
2020-10-09  8:28  3%     ` Thomas Monjalon
2020-10-09 12:39  3%   ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Savinay Dharmappa
2020-10-09 12:39  4%     ` [dpdk-dev] [PATCH v9 1/8] sched: add support profile table Savinay Dharmappa
2020-10-09 12:39  2%     ` [dpdk-dev] [PATCH v9 3/8] sched: update subport rate dynamically Savinay Dharmappa
2020-10-09 12:39  5%     ` [dpdk-dev] [PATCH v9 8/8] sched: remove redundant code Savinay Dharmappa
2020-10-11 20:11  0%     ` [dpdk-dev] [PATCH v9 0/8] Enable dynamic config of subport bandwidth Thomas Monjalon
2020-10-12  5:24  0%       ` Dharmappa, Savinay
2020-10-12 23:08  0%         ` Dharmappa, Savinay
2020-10-13 13:56  0%           ` Dharmappa, Savinay
2020-10-07  6:05  4% [dpdk-dev] 19.11 ABI changes Денис Коновалов
2020-10-07 12:18     [dpdk-dev] [PATCH 0/2] Add missing API change in release note Maxime Coquelin
2020-10-07 12:18  4% ` [dpdk-dev] [PATCH 1/2] baseband/fpga_5gnr_fec: add " Maxime Coquelin
2020-10-07 12:18  4% ` [dpdk-dev] [PATCH 2/2] baseband/fpga_lte_fec: " Maxime Coquelin
2020-10-08  9:17     [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid Ciara Loftus
2020-10-08 11:55  3% ` Ferruh Yigit
2020-10-08 23:37  4% [dpdk-dev] Techboard Minutes of Meeting - 10/8/2020 Honnappa Nagarahalli
2020-10-12 19:21     [dpdk-dev] [PATCH v4 0/2] remove list end enumerators Arek Kusztal
2020-10-12 19:21  7% ` [dpdk-dev] [PATCH v4 1/2] cryptodev: remove crypto " Arek Kusztal
2020-10-14 13:28     [dpdk-dev] [PATCH 00/11] ethdev: change device stop to return status Andrew Rybchenko
2020-10-15 13:30     ` [dpdk-dev] [PATCH v2 " Andrew Rybchenko
2020-10-15 13:30  4%   ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
2020-10-16  9:22  0%     ` Ferruh Yigit
2020-10-16 11:20  3%     ` Kinsella, Ray
2020-10-16 17:13  0%       ` Andrew Rybchenko
2020-10-15  9:56 11% [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node Ray Kinsella
2020-10-15 10:08  0% ` David Marchand
2020-10-15 10:10  3%   ` Kinsella, Ray
2020-10-15 16:00     [dpdk-dev] performance degradation with fpic Ali Alnubani
2020-10-15 17:08     ` Bruce Richardson
2020-10-15 17:14       ` Thomas Monjalon
2020-10-15 21:44         ` Stephen Hemminger
2020-10-16  8:35  3%       ` Bruce Richardson
