* [dpdk-dev] [PATCH v4 05/12] ethdev: deprecate hard-to-use or ambiguous items and actions
From: Ivan Malov @ 2021-10-13 16:42 UTC (permalink / raw)
To: dev
Cc: Ferruh Yigit, Thomas Monjalon, Ori Kam, Andrew Rybchenko, Ray Kinsella
PF, VF and PHY_PORT require that applications have extra
knowledge of the underlying NIC and thus are hard to use.
Also, the corresponding items depend on the direction
attribute (ingress / egress), which complicates their
use in applications and interpretation in PMDs.
The concept of PORT_ID is ambiguous as it doesn't say whether
the port in question is an ethdev or the represented entity.
The items and actions PORT_REPRESENTOR and REPRESENTED_PORT
should be used instead.
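
For illustration, a minimal sketch (not part of this patch; the array names and
the port number are arbitrary) of how a rule that used the deprecated PORT_ID
action can state its intent explicitly with the replacement actions:

#include <rte_flow.h>

/* Deliver matched "transfer" traffic to the ethdev itself ... */
static const struct rte_flow_action_ethdev to_ethdev = { .port_id = 0 };

static const struct rte_flow_action deliver_to_ethdev[] = {
        /* was: RTE_FLOW_ACTION_TYPE_PORT_ID, which did not say whether
         * the target is the ethdev or the entity it represents */
        { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &to_ethdev },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};

/* ... or to the entity (network port, guest VF) represented by it. */
static const struct rte_flow_action deliver_to_entity[] = {
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &to_ethdev },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
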
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
doc/guides/prog_guide/rte_flow.rst | 32 +++++++++++++++
doc/guides/rel_notes/deprecation.rst | 9 ++---
doc/guides/rel_notes/release_21_11.rst | 3 ++
lib/ethdev/rte_flow.h | 56 ++++++++++++++++++++++++++
4 files changed, 94 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27a17fac58..d7185c49df 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -504,6 +504,10 @@ Usage example, matching non-TCPv4 packets only:
Item: ``PF``
^^^^^^^^^^^^
+This item is deprecated. Consider:
+ - `Item: PORT_REPRESENTOR`_
+ - `Item: REPRESENTED_PORT`_
+
Matches traffic originating from (ingress) or going to (egress) the physical
function of the current device.
@@ -531,6 +535,10 @@ the application and thus not associated with a DPDK port ID.
Item: ``VF``
^^^^^^^^^^^^
+This item is deprecated. Consider:
+ - `Item: PORT_REPRESENTOR`_
+ - `Item: REPRESENTED_PORT`_
+
Matches traffic originating from (ingress) or going to (egress) a given
virtual function of the current device.
@@ -562,6 +570,10 @@ separate entities, should be addressed through their own DPDK port IDs.
Item: ``PHY_PORT``
^^^^^^^^^^^^^^^^^^
+This item is deprecated. Consider:
+ - `Item: PORT_REPRESENTOR`_
+ - `Item: REPRESENTED_PORT`_
+
Matches traffic originating from (ingress) or going to (egress) a physical
port of the underlying device.
@@ -596,6 +608,10 @@ associated with a port_id should be retrieved by other means.
Item: ``PORT_ID``
^^^^^^^^^^^^^^^^^
+This item is deprecated. Consider:
+ - `Item: PORT_REPRESENTOR`_
+ - `Item: REPRESENTED_PORT`_
+
Matches traffic originating from (ingress) or going to (egress) a given DPDK
port ID.
@@ -1950,6 +1966,10 @@ only matching traffic goes through.
Action: ``PF``
^^^^^^^^^^^^^^
+This action is deprecated. Consider:
+ - `Action: PORT_REPRESENTOR`_
+ - `Action: REPRESENTED_PORT`_
+
Directs matching traffic to the physical function (PF) of the current
device.
@@ -1970,6 +1990,10 @@ See `Item: PF`_.
Action: ``VF``
^^^^^^^^^^^^^^
+This action is deprecated. Consider:
+ - `Action: PORT_REPRESENTOR`_
+ - `Action: REPRESENTED_PORT`_
+
Directs matching traffic to a given virtual function of the current device.
Packets matched by a VF pattern item can be redirected to their original VF
@@ -1994,6 +2018,10 @@ See `Item: VF`_.
Action: ``PHY_PORT``
^^^^^^^^^^^^^^^^^^^^
+This action is deprecated. Consider:
+ - `Action: PORT_REPRESENTOR`_
+ - `Action: REPRESENTED_PORT`_
+
Directs matching traffic to a given physical port index of the underlying
device.
@@ -2013,6 +2041,10 @@ See `Item: PHY_PORT`_.
Action: ``PORT_ID``
^^^^^^^^^^^^^^^^^^^
+This action is deprecated. Consider:
+ - `Action: PORT_REPRESENTOR`_
+ - `Action: REPRESENTED_PORT`_
+
Directs matching traffic to a given DPDK port ID.
See `Item: PORT_ID`_.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5853b5988d..25aec56bec 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -113,12 +113,6 @@ Deprecation Notices
to support modifying fields larger than 64 bits.
In addition, documentation will be updated to clarify byte order.
-* ethdev: Definition of the flow API action ``RTE_FLOW_ACTION_TYPE_PORT_ID``
- is ambiguous and needs clarification.
- Structure ``rte_flow_action_port_id`` will be extended to specify
- traffic direction to the represented entity or ethdev port itself
- in DPDK 21.11.
-
* ethdev: Flow API documentation is unclear if ethdev port used to create
a flow rule adds any implicit match criteria in the case of transfer rules.
The semantics will be clarified in DPDK 21.11 and it will require fixes in
@@ -149,6 +143,9 @@ Deprecation Notices
consistent with existing outer header checksum status flag naming, which
should help in reducing confusion about its usage.
+* ethdev: Items and actions ``PF``, ``VF``, ``PHY_PORT``, ``PORT_ID`` are
+ deprecated as hard-to-use / ambiguous and will be removed in DPDK 22.11.
+
* net: The structure ``rte_ipv4_hdr`` will have two unions.
The first union is for existing ``version_ihl`` byte
and new bitfield for version and IHL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6c15afc1e9..75c4f6d018 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -254,6 +254,9 @@ API Changes
* ethdev: Added items and actions ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` to flow API.
+* ethdev: Deprecated items and actions ``PF``, ``VF``, ``PHY_PORT``, ``PORT_ID``.
+ Suggested items and actions ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` instead.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index ff32c0a5ee..76653105a0 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -160,6 +160,10 @@ enum rte_flow_item_type {
RTE_FLOW_ITEM_TYPE_ANY,
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* [META]
*
* Matches traffic originating from (ingress) or going to (egress)
@@ -170,6 +174,10 @@ enum rte_flow_item_type {
RTE_FLOW_ITEM_TYPE_PF,
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* [META]
*
* Matches traffic originating from (ingress) or going to (egress) a
@@ -180,6 +188,10 @@ enum rte_flow_item_type {
RTE_FLOW_ITEM_TYPE_VF,
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* [META]
*
* Matches traffic originating from (ingress) or going to (egress) a
@@ -190,6 +202,10 @@ enum rte_flow_item_type {
RTE_FLOW_ITEM_TYPE_PHY_PORT,
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* [META]
*
* Matches traffic originating from (ingress) or going to (egress) a
@@ -640,6 +656,10 @@ static const struct rte_flow_item_any rte_flow_item_any_mask = {
#endif
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ITEM_TYPE_VF
*
* Matches traffic originating from (ingress) or going to (egress) a given
@@ -669,6 +689,10 @@ static const struct rte_flow_item_vf rte_flow_item_vf_mask = {
#endif
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ITEM_TYPE_PHY_PORT
*
* Matches traffic originating from (ingress) or going to (egress) a
@@ -700,6 +724,10 @@ static const struct rte_flow_item_phy_port rte_flow_item_phy_port_mask = {
#endif
/**
+ * @deprecated
+ * @see RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ITEM_TYPE_PORT_ID
*
* Matches traffic originating from (ingress) or going to (egress) a given
@@ -1998,6 +2026,10 @@ enum rte_flow_action_type {
RTE_FLOW_ACTION_TYPE_RSS,
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* Directs matching traffic to the physical function (PF) of the
* current device.
*
@@ -2006,6 +2038,10 @@ enum rte_flow_action_type {
RTE_FLOW_ACTION_TYPE_PF,
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* Directs matching traffic to a given virtual function of the
* current device.
*
@@ -2014,6 +2050,10 @@ enum rte_flow_action_type {
RTE_FLOW_ACTION_TYPE_VF,
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* Directs packets to a given physical port index of the underlying
* device.
*
@@ -2022,6 +2062,10 @@ enum rte_flow_action_type {
RTE_FLOW_ACTION_TYPE_PHY_PORT,
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* Directs matching traffic to a given DPDK port ID.
*
* See struct rte_flow_action_port_id.
@@ -2648,6 +2692,10 @@ struct rte_flow_action_rss {
};
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ACTION_TYPE_VF
*
* Directs matching traffic to a given virtual function of the current
@@ -2666,6 +2714,10 @@ struct rte_flow_action_vf {
};
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ACTION_TYPE_PHY_PORT
*
* Directs packets to a given physical port index of the underlying
@@ -2680,6 +2732,10 @@ struct rte_flow_action_phy_port {
};
/**
+ * @deprecated
+ * @see RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
+ * @see RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
+ *
* RTE_FLOW_ACTION_TYPE_PORT_ID
*
* Directs matching traffic to a given DPDK port ID.
--
2.20.1
* [dpdk-dev] [PATCH v4 04/12] ethdev: add represented port action to flow API
From: Ivan Malov @ 2021-10-13 16:42 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Thomas Monjalon, Ori Kam, Andrew Rybchenko, Xiaoyun Li
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
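
A minimal usage sketch (not part of the patch; the helper name and port
numbers are arbitrary, error handling is omitted): at embedded switch level,
forward everything that ethdev 0 sends into the switch to the entity
represented by ethdev 1.

#include <rte_flow.h>

static struct rte_flow *
forward_to_represented_entity(uint16_t proxy_port, struct rte_flow_error *err)
{
        /* "Transfer" rules are matched at embedded switch level. */
        struct rte_flow_attr attr = { .transfer = 1 };
        struct rte_flow_item_ethdev from_ethdev = { .port_id = 0 };
        struct rte_flow_action_ethdev to_entity = { .port_id = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, .spec = &from_ethdev },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &to_entity },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port, &attr, pattern, actions, err);
}

The testpmd counterpart added below is the "represented_port" action with
its "ethdev_port_id" argument.
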
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
app/test-pmd/cmdline_flow.c | 26 +++++++++++
doc/guides/prog_guide/rte_flow.rst | 49 +++++++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 11 ++++-
6 files changed, 92 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1496d7a067..d897d0d1d4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -461,6 +461,8 @@ enum index {
ACTION_POL_R,
ACTION_PORT_REPRESENTOR,
ACTION_PORT_REPRESENTOR_PORT_ID,
+ ACTION_REPRESENTED_PORT,
+ ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1454,6 +1456,7 @@ static const enum index next_action[] = {
ACTION_CONNTRACK,
ACTION_CONNTRACK_UPDATE,
ACTION_PORT_REPRESENTOR,
+ ACTION_REPRESENTED_PORT,
ZERO,
};
@@ -1740,6 +1743,12 @@ static const enum index action_port_representor[] = {
ZERO,
};
+static const enum index action_represented_port[] = {
+ ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -4836,6 +4845,23 @@ static const struct token token_list[] = {
port_id)),
.call = parse_vc_conf,
},
+ [ACTION_REPRESENTED_PORT] = {
+ .name = "represented_port",
+ .help = "at embedded switch level, send matching traffic to the entity represented by the given ethdev",
+ .priv = PRIV_ACTION(REPRESENTED_PORT,
+ sizeof(struct rte_flow_action_ethdev)),
+ .next = NEXT(action_represented_port),
+ .call = parse_vc,
+ },
+ [ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID] = {
+ .name = "ethdev_port_id",
+ .help = "ethdev port ID",
+ .next = NEXT(action_represented_port,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_ethdev,
+ port_id)),
+ .call = parse_vc_conf,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 587ed37c2c..27a17fac58 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1532,6 +1532,8 @@ at the opposite end of the "wire" leading to the ethdev.
This item is meant to use the same structure as `Item: PORT_REPRESENTOR`_.
+See also `Action: REPRESENTED_PORT`_.
+
Actions
~~~~~~~
@@ -3145,6 +3147,53 @@ at the opposite end of the "wire" leading to the ethdev.
See also `Item: PORT_REPRESENTOR`_.
+Action: ``REPRESENTED_PORT``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+At embedded switch level, send matching traffic to
+the entity represented by the given ethdev.
+
+Term **ethdev** and the concept of **port representor** are synonymous.
+The **represented port** is an *entity* plugged to the embedded switch
+at the opposite end of the "wire" leading to the ethdev.
+
+::
+
+ .--------------------.
+ | PORT_REPRESENTOR | Ethdev (Application Port Referred to by its ID)
+ '--------------------'
+ :
+ :
+ .----------------.
+ | Logical Port |
+ '----------------'
+ :
+ :
+ :
+ :
+ .----------. .--------------------.
+ | Switch | <== | Matching Traffic |
+ '----------' '--------------------'
+ ||
+ ||
+ ||
+ \/
+ .----------------.
+ | Logical Port |
+ '----------------'
+ ||
+ \/
+ .--------------------.
+ | REPRESENTED_PORT | Net / Guest / Another Ethdev (Same Application)
+ '--------------------'
+
+
+- Requires `Attribute: Transfer`_.
+
+This action is meant to use the same structure as `Action: PORT_REPRESENTOR`_.
+
+See also `Item: REPRESENTED_PORT`_.
+
Negative types
~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9a0ab97914..6c15afc1e9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -252,7 +252,7 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
-* ethdev: Added items ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` and action ``PORT_REPRESENTOR`` to flow API.
+* ethdev: Added items and actions ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` to flow API.
ABI Changes
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 46b7f07cbc..22ba8f0516 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4090,6 +4090,11 @@ This section lists supported actions and their attributes, if any.
- ``port_id {unsigned}``: ethdev port ID
+- ``represented_port``: at embedded switch level, send matching traffic to
+ the entity represented by the given ethdev
+
+ - ``ethdev_port_id {unsigned}``: ethdev port ID
+
Destroying flow rules
~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index b074b1c77d..542e40e496 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -192,6 +192,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
MK_FLOW_ACTION(INDIRECT, 0),
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
+ MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 56fd4393e5..ff32c0a5ee 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2462,6 +2462,14 @@ enum rte_flow_action_type {
* @see struct rte_flow_action_ethdev
*/
RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR,
+
+ /**
+ * At embedded switch level, send matching traffic to
+ * the entity represented by the given ethdev.
+ *
+ * @see struct rte_flow_action_ethdev
+ */
+ RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
};
/**
@@ -3212,7 +3220,8 @@ struct rte_flow_action_meter_color {
* @b EXPERIMENTAL: this structure may change without prior notice
*
* Provides an ethdev port ID for use with the following actions:
- * RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR.
+ * RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR,
+ * RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT.
*/
struct rte_flow_action_ethdev {
uint16_t port_id; /**< ethdev port ID */
--
2.20.1
* [dpdk-dev] [PATCH v4 03/12] ethdev: add port representor action to flow API
From: Ivan Malov @ 2021-10-13 16:42 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Thomas Monjalon, Ori Kam, Andrew Rybchenko, Xiaoyun Li
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at embedded switch level.
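
A minimal usage sketch (not part of the patch; the helper name and port
numbers are arbitrary, error handling is omitted): deliver traffic entering
the embedded switch from the entity represented by ethdev 0 up to ethdev 0
itself, i.e. to the application.

#include <rte_flow.h>

static struct rte_flow *
deliver_to_application(uint16_t proxy_port, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .transfer = 1 };
        struct rte_flow_item_ethdev from_entity = { .port_id = 0 };
        struct rte_flow_action_ethdev to_ethdev = { .port_id = 0 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &from_entity },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &to_ethdev },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port, &attr, pattern, actions, err);
}
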
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
app/test-pmd/cmdline_flow.c | 26 ++++++++++
doc/guides/prog_guide/rte_flow.rst | 56 +++++++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 ++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 18 +++++++
6 files changed, 107 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 354f0fb2d7..1496d7a067 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -459,6 +459,8 @@ enum index {
ACTION_POL_G,
ACTION_POL_Y,
ACTION_POL_R,
+ ACTION_PORT_REPRESENTOR,
+ ACTION_PORT_REPRESENTOR_PORT_ID,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1451,6 +1453,7 @@ static const enum index next_action[] = {
ACTION_MODIFY_FIELD,
ACTION_CONNTRACK,
ACTION_CONNTRACK_UPDATE,
+ ACTION_PORT_REPRESENTOR,
ZERO,
};
@@ -1731,6 +1734,12 @@ static const enum index action_update_conntrack[] = {
ZERO,
};
+static const enum index action_port_representor[] = {
+ ACTION_PORT_REPRESENTOR_PORT_ID,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -4810,6 +4819,23 @@ static const struct token token_list[] = {
.next = NEXT(action_update_conntrack),
.call = parse_vc_action_conntrack_update,
},
+ [ACTION_PORT_REPRESENTOR] = {
+ .name = "port_representor",
+ .help = "at embedded switch level, send matching traffic to the given ethdev",
+ .priv = PRIV_ACTION(PORT_REPRESENTOR,
+ sizeof(struct rte_flow_action_ethdev)),
+ .next = NEXT(action_port_representor),
+ .call = parse_vc,
+ },
+ [ACTION_PORT_REPRESENTOR_PORT_ID] = {
+ .name = "port_id",
+ .help = "ethdev port ID",
+ .next = NEXT(action_port_representor,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_ethdev,
+ port_id)),
+ .call = parse_vc_conf,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2da286dce8..587ed37c2c 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1484,6 +1484,8 @@ at the opposite end of the "wire" leading to the ethdev.
- Default ``mask`` provides exact match behaviour.
+See also `Action: PORT_REPRESENTOR`_.
+
Item: ``REPRESENTED_PORT``
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -3089,6 +3091,60 @@ which is set in the packet meta-data (i.e. struct ``rte_mbuf::sched::color``)
| ``meter_color`` | Packet color |
+-----------------+--------------+
+Action: ``PORT_REPRESENTOR``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+At embedded switch level, send matching traffic to the given ethdev.
+
+Term **ethdev** and the concept of **port representor** are synonymous.
+The **represented port** is an *entity* plugged to the embedded switch
+at the opposite end of the "wire" leading to the ethdev.
+
+::
+
+ .--------------------.
+ | PORT_REPRESENTOR | Ethdev (Application Port Referred to by its ID)
+ '--------------------'
+ /\
+ ||
+ .----------------.
+ | Logical Port |
+ '----------------'
+ /\
+ ||
+ ||
+ ||
+ .----------. .--------------------.
+ | Switch | <== | Matching Traffic |
+ '----------' '--------------------'
+ :
+ :
+ :
+ :
+ .----------------.
+ | Logical Port |
+ '----------------'
+ :
+ :
+ .--------------------.
+ | REPRESENTED_PORT | Net / Guest / Another Ethdev (Same Application)
+ '--------------------'
+
+
+- Requires `Attribute: Transfer`_.
+
+.. _table_rte_flow_action_ethdev:
+
+.. table:: ``struct rte_flow_action_ethdev``
+
+ +-------------+----------------+
+ | Field | Value |
+ +=============+================+
+ | ``port_id`` | ethdev port ID |
+ +-------------+----------------+
+
+See also `Item: PORT_REPRESENTOR`_.
+
Negative types
~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b9f918cab8..9a0ab97914 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -252,7 +252,7 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
-* ethdev: Added items ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` to flow API.
+* ethdev: Added items ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` and action ``PORT_REPRESENTOR`` to flow API.
ABI Changes
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 61669d1d5a..46b7f07cbc 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4085,6 +4085,11 @@ This section lists supported actions and their attributes, if any.
- ``type {value}``: Set color type with specified value(green/yellow/red)
+- ``port_representor``: at embedded switch level, send matching traffic to
+ the given ethdev
+
+ - ``port_id {unsigned}``: ethdev port ID
+
Destroying flow rules
~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index d4b654a2c6..b074b1c77d 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -191,6 +191,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
*/
MK_FLOW_ACTION(INDIRECT, 0),
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
+ MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b50c3d38b5..56fd4393e5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2455,6 +2455,13 @@ enum rte_flow_action_type {
* See struct rte_flow_action_meter_color.
*/
RTE_FLOW_ACTION_TYPE_METER_COLOR,
+
+ /**
+ * At embedded switch level, sends matching traffic to the given ethdev.
+ *
+ * @see struct rte_flow_action_ethdev
+ */
+ RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR,
};
/**
@@ -3200,6 +3207,17 @@ struct rte_flow_action_meter_color {
enum rte_color color; /**< Packet color. */
};
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Provides an ethdev port ID for use with the following actions:
+ * RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR.
+ */
+struct rte_flow_action_ethdev {
+ uint16_t port_id; /**< ethdev port ID */
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.20.1
* [dpdk-dev] [PATCH v4 02/12] ethdev: add represented port item to flow API
From: Ivan Malov @ 2021-10-13 16:42 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Thomas Monjalon, Ori Kam, Andrew Rybchenko, Xiaoyun Li
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
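
A minimal pattern sketch (not part of the patch; names and the port number
are arbitrary): match any traffic entering the embedded switch from the
entity, e.g. a VF, represented by ethdev 1. The default mask gives an exact
match on the port ID.

#include <rte_flow.h>

/* To be used with attr.transfer = 1 and no ingress/egress attributes. */
static const struct rte_flow_item_ethdev from_entity = { .port_id = 1 };

static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &from_entity },
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};
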
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
app/test-pmd/cmdline_flow.c | 25 +++++++++++
doc/guides/prog_guide/rte_flow.rst | 46 +++++++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 13 +++++-
6 files changed, 90 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5c480db91d..354f0fb2d7 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -308,6 +308,8 @@ enum index {
ITEM_POL_POLICY,
ITEM_PORT_REPRESENTOR,
ITEM_PORT_REPRESENTOR_PORT_ID,
+ ITEM_REPRESENTED_PORT,
+ ITEM_REPRESENTED_PORT_ETHDEV_PORT_ID,
/* Validate/create actions. */
ACTIONS,
@@ -1002,6 +1004,7 @@ static const enum index next_item[] = {
ITEM_INTEGRITY,
ITEM_CONNTRACK,
ITEM_PORT_REPRESENTOR,
+ ITEM_REPRESENTED_PORT,
END_SET,
ZERO,
};
@@ -1376,6 +1379,12 @@ static const enum index item_port_representor[] = {
ZERO,
};
+static const enum index item_represented_port[] = {
+ ITEM_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3630,6 +3639,21 @@ static const struct token token_list[] = {
item_param),
.args = ARGS(ARGS_ENTRY(struct rte_flow_item_ethdev, port_id)),
},
+ [ITEM_REPRESENTED_PORT] = {
+ .name = "represented_port",
+ .help = "match traffic entering the embedded switch from the entity represented by the given ethdev",
+ .priv = PRIV_ITEM(REPRESENTED_PORT,
+ sizeof(struct rte_flow_item_ethdev)),
+ .next = NEXT(item_represented_port),
+ .call = parse_vc,
+ },
+ [ITEM_REPRESENTED_PORT_ETHDEV_PORT_ID] = {
+ .name = "ethdev_port_id",
+ .help = "ethdev port ID",
+ .next = NEXT(item_represented_port, NEXT_ENTRY(COMMON_UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_ethdev, port_id)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -8358,6 +8382,7 @@ flow_item_default_mask(const struct rte_flow_item *item)
mask = &rte_flow_item_pfcp_mask;
break;
case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
+ case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
mask = &rte_flow_item_ethdev_mask;
break;
default:
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index d194640469..2da286dce8 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1484,6 +1484,52 @@ at the opposite end of the "wire" leading to the ethdev.
- Default ``mask`` provides exact match behaviour.
+Item: ``REPRESENTED_PORT``
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches traffic entering the embedded switch from
+the entity represented by the given ethdev.
+
+Term **ethdev** and the concept of **port representor** are synonymous.
+The **represented port** is an *entity* plugged to the embedded switch
+at the opposite end of the "wire" leading to the ethdev.
+
+::
+
+ .--------------------.
+ | PORT_REPRESENTOR | Ethdev (Application Port Referred to by its ID)
+ '--------------------'
+ :
+ :
+ .----------------.
+ | Logical Port |
+ '----------------'
+ :
+ :
+ :
+ :
+ .----------.
+ | Switch |
+ '----------'
+ /\
+ ||
+ ||
+ ||
+ .----------------.
+ | Logical Port |
+ '----------------'
+ /\
+ ||
+ .--------------------.
+ | REPRESENTED_PORT | Net / Guest / Another Ethdev (Same Application)
+ '--------------------'
+
+
+- Incompatible with `Attribute: Traffic direction`_.
+- Requires `Attribute: Transfer`_.
+
+This item is meant to use the same structure as `Item: PORT_REPRESENTOR`_.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 07f9d39a5b..b9f918cab8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -252,7 +252,7 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
-* ethdev: Added item ``PORT_REPRESENTOR`` to flow API.
+* ethdev: Added items ``PORT_REPRESENTOR``, ``REPRESENTED_PORT`` to flow API.
ABI Changes
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 90765f9090..61669d1d5a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3805,6 +3805,11 @@ This section lists supported pattern items and their attributes, if any.
- ``port_id {unsigned}``: ethdev port ID
+- ``represented_port``: match traffic entering the embedded switch from
+ the entity represented by the given ethdev
+
+ - ``ethdev_port_id {unsigned}``: ethdev port ID
+
Actions list
^^^^^^^^^^^^
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 5e9317c6d1..d4b654a2c6 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -101,6 +101,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
MK_FLOW_ITEM(PORT_REPRESENTOR, sizeof(struct rte_flow_item_ethdev)),
+ MK_FLOW_ITEM(REPRESENTED_PORT, sizeof(struct rte_flow_item_ethdev)),
};
/** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1e3ef77ead..b50c3d38b5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -583,6 +583,16 @@ enum rte_flow_item_type {
* @see struct rte_flow_item_ethdev
*/
RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR,
+
+ /**
+ * [META]
+ *
+ * Matches traffic entering the embedded switch from
+ * the entity represented by the given ethdev.
+ *
+ * @see struct rte_flow_item_ethdev
+ */
+ RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
};
/**
@@ -1813,7 +1823,8 @@ static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
* @b EXPERIMENTAL: this structure may change without prior notice
*
* Provides an ethdev port ID for use with the following items:
- * RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR.
+ * RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR,
+ * RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT.
*/
struct rte_flow_item_ethdev {
uint16_t port_id; /**< ethdev port ID */
--
2.20.1
* [dpdk-dev] [PATCH v4 01/12] ethdev: add port representor item to flow API
From: Ivan Malov @ 2021-10-13 16:42 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Thomas Monjalon, Ori Kam, Andrew Rybchenko, Xiaoyun Li
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
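
A minimal usage sketch (not part of the patch; the helper name and port
numbers are arbitrary, error handling is omitted): drop everything that
ethdev 0 sends into the embedded switch.

#include <rte_flow.h>

static struct rte_flow *
drop_from_ethdev(uint16_t proxy_port, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .transfer = 1 }; /* no ingress/egress */
        struct rte_flow_item_ethdev from_ethdev = { .port_id = 0 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, .spec = &from_ethdev },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port, &attr, pattern, actions, err);
}
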
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
app/test-pmd/cmdline_flow.c | 27 ++++++++++
doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 2 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 27 ++++++++++
6 files changed, 120 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0b5856c7d5..5c480db91d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -306,6 +306,8 @@ enum index {
ITEM_POL_PORT,
ITEM_POL_METER,
ITEM_POL_POLICY,
+ ITEM_PORT_REPRESENTOR,
+ ITEM_PORT_REPRESENTOR_PORT_ID,
/* Validate/create actions. */
ACTIONS,
@@ -999,6 +1001,7 @@ static const enum index next_item[] = {
ITEM_GENEVE_OPT,
ITEM_INTEGRITY,
ITEM_CONNTRACK,
+ ITEM_PORT_REPRESENTOR,
END_SET,
ZERO,
};
@@ -1367,6 +1370,12 @@ static const enum index item_integrity_lv[] = {
ZERO,
};
+static const enum index item_port_representor[] = {
+ ITEM_PORT_REPRESENTOR_PORT_ID,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3606,6 +3615,21 @@ static const struct token token_list[] = {
item_param),
.args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
},
+ [ITEM_PORT_REPRESENTOR] = {
+ .name = "port_representor",
+ .help = "match traffic entering the embedded switch from the given ethdev",
+ .priv = PRIV_ITEM(PORT_REPRESENTOR,
+ sizeof(struct rte_flow_item_ethdev)),
+ .next = NEXT(item_port_representor),
+ .call = parse_vc,
+ },
+ [ITEM_PORT_REPRESENTOR_PORT_ID] = {
+ .name = "port_id",
+ .help = "ethdev port ID",
+ .next = NEXT(item_port_representor, NEXT_ENTRY(COMMON_UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_ethdev, port_id)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -8333,6 +8357,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
case RTE_FLOW_ITEM_TYPE_PFCP:
mask = &rte_flow_item_pfcp_mask;
break;
+ case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
+ mask = &rte_flow_item_ethdev_mask;
+ break;
default:
break;
}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3cb014c1fa..d194640469 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1425,6 +1425,65 @@ Matches a conntrack state after conntrack action.
- ``flags``: conntrack packet state flags.
- Default ``mask`` matches all state bits.
+Item: ``PORT_REPRESENTOR``
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches traffic entering the embedded switch from the given ethdev.
+
+Term **ethdev** and the concept of **port representor** are synonymous.
+The **represented port** is an *entity* plugged to the embedded switch
+at the opposite end of the "wire" leading to the ethdev.
+
+::
+
+ .--------------------.
+ | PORT_REPRESENTOR | Ethdev (Application Port Referred to by its ID)
+ '--------------------'
+ ||
+ \/
+ .----------------.
+ | Logical Port |
+ '----------------'
+ ||
+ ||
+ ||
+ \/
+ .----------.
+ | Switch |
+ '----------'
+ :
+ :
+ :
+ :
+ .----------------.
+ | Logical Port |
+ '----------------'
+ :
+ :
+ .--------------------.
+ | REPRESENTED_PORT | Net / Guest / Another Ethdev (Same Application)
+ '--------------------'
+
+
+- Incompatible with `Attribute: Traffic direction`_.
+- Requires `Attribute: Transfer`_.
+
+.. _table_rte_flow_item_ethdev:
+
+.. table:: ``struct rte_flow_item_ethdev``
+
+ +----------+-------------+---------------------------+
+ | Field | Subfield | Value |
+ +==========+=============+===========================+
+ | ``spec`` | ``port_id`` | ethdev port ID |
+ +----------+-------------+---------------------------+
+ | ``last`` | ``port_id`` | upper range value |
+ +----------+-------------+---------------------------+
+ | ``mask`` | ``port_id`` | zeroed for wildcard match |
+ +----------+-------------+---------------------------+
+
+- Default ``mask`` provides exact match behaviour.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5c762df62..07f9d39a5b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -252,6 +252,8 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: Added item ``PORT_REPRESENTOR`` to flow API.
+
ABI Changes
-----------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a0efb7d0b0..90765f9090 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3801,6 +3801,10 @@ This section lists supported pattern items and their attributes, if any.
- ``conntrack``: match conntrack state.
+- ``port_representor``: match traffic entering the embedded switch from the given ethdev
+
+ - ``port_id {unsigned}``: ethdev port ID
+
Actions list
^^^^^^^^^^^^
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 8cb7a069c8..5e9317c6d1 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -100,6 +100,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
+ MK_FLOW_ITEM(PORT_REPRESENTOR, sizeof(struct rte_flow_item_ethdev)),
};
/** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 5f87851f8c..1e3ef77ead 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -574,6 +574,15 @@ enum rte_flow_item_type {
* @see struct rte_flow_item_conntrack.
*/
RTE_FLOW_ITEM_TYPE_CONNTRACK,
+
+ /**
+ * [META]
+ *
+ * Matches traffic entering the embedded switch from the given ethdev.
+ *
+ * @see struct rte_flow_item_ethdev
+ */
+ RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR,
};
/**
@@ -1799,6 +1808,24 @@ static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
};
#endif
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Provides an ethdev port ID for use with the following items:
+ * RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR.
+ */
+struct rte_flow_item_ethdev {
+ uint16_t port_id; /**< ethdev port ID */
+};
+
+/** Default mask for items based on struct rte_flow_item_ethdev */
+#ifndef __cplusplus
+static const struct rte_flow_item_ethdev rte_flow_item_ethdev_mask = {
+ .port_id = 0xffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.20.1
* Re: [dpdk-dev] [PATCH v6 3/6] ethdev: copy fast-path API into separate structure
From: Andrew Rybchenko @ 2021-10-13 14:25 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
On 10/13/21 4:37 PM, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep a minimal part of the data from rte_eth_dev public,
> so we can still use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea behind this new schema:
> 1. PMDs keep setting up fast-path function pointers and related data
> inside the rte_eth_dev struct in the same way they did before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> (for secondary process) we call eth_dev_fp_ops_setup, which
> copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regressions and help to avoid changes in driver code.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
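
A hypothetical sketch of the dispatch scheme described above (the member
names of struct rte_eth_fp_ops are not shown in this mail and are assumed
here): fast-path inline helpers index the flat array by port ID instead of
touching rte_eth_devices[] directly.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static inline uint16_t
sketch_rx_burst(uint16_t port_id, uint16_t queue_id,
                struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
        struct rte_eth_fp_ops *ops = &rte_eth_fp_ops[port_id];

        /* The function pointer and the per-queue data pointer were copied
         * from rte_eth_dev by eth_dev_fp_ops_setup() at start time and are
         * reset to dummy values by eth_dev_fp_ops_reset() on stop/release. */
        return ops->rx_pkt_burst(ops->rxq.data[queue_id], rx_pkts, nb_pkts);
}
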
* [dpdk-dev] [PATCH v6 6/6] ethdev: hide eth dev related structures
From: Konstantin Ananyev @ 2021-10-13 13:37 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into a private header (ethdev_driver.h).
A few minor changes keep DPDK building after that.
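
A small editorial sketch of the supported way for an application to read
device data now that these structures are private (the helper name is
arbitrary; only public ethdev API calls are used):

#include <stdio.h>
#include <rte_ethdev.h>

/* Query the MTU through the public API rather than by dereferencing
 * rte_eth_devices[port_id].data, which is now internal to ethdev and PMDs. */
static int
print_port_mtu(uint16_t port_id)
{
        uint16_t mtu;
        int ret = rte_eth_dev_get_mtu(port_id, &mtu);

        if (ret == 0)
                printf("port %u: MTU %u\n", (unsigned int)port_id,
                       (unsigned int)mtu);
        return ret;
}
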
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/netvsc/hn_var.h | 1 +
lib/ethdev/ethdev_driver.h | 154 ++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 149 -----------------
lib/ethdev/version.map | 2 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
13 files changed, 170 insertions(+), 158 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d44c1696cd..626448988d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -310,6 +310,12 @@ ABI Changes
to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
is used by public inline function ``rte_eth_rx_queue_count``.
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+ private data structures. ``rte_eth_devices[]`` can't be accessed directly
+ by user any more. While it is an ABI breakage, this change is intended
+ to be transparent for both users (no changes in user app is required) and
+ PMD developers (no changes in PMD is required).
+
Known Issues
------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
#include <rte_atomic.h>
#include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_spinlock.h>
#include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
#include <cryptodev_pmd.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_event_crypto_adapter.h>
#include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
#include <rte_mbuf.h>
#include <rte_io.h>
#include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "../cxgbe_compat.h"
#include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
#include <unistd.h>
#include <stdarg.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_eth_ctrl.h>
#include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 18703f99b9..fbb3995507 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
*/
#include <rte_eal_paging.h>
+#include <ethdev_driver.h>
/*
* Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 56db53df1a..0174ba03d7 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,160 @@
#include <rte_ethdev.h>
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+ struct rte_eth_rxtx_callback *next;
+ union{
+ rte_rx_callback_fn rx;
+ rte_tx_callback_fn tx;
+ } fn;
+ void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+ eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+ eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /**< Pointer to PMD transmit prepare function. */
+ eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+
+ /**
+ * points to device data that is shared between
+ * primary and secondary processes.
+ */
+ struct rte_eth_dev_data *data;
+ void *process_private; /**< Pointer to per-process device data. */
+ const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+ struct rte_device *device; /**< Backing device */
+ struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+ /** User application callbacks for NIC interrupts */
+ struct rte_eth_dev_cb_list link_intr_cbs;
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ enum rte_eth_dev_state state; /**< Flag indicating the port state */
+ void *security_ctx; /**< Context for security ops */
+
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+ char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+ void **rx_queues; /**< Array of pointers to RX queues. */
+ void **tx_queues; /**< Array of pointers to TX queues. */
+ uint16_t nb_rx_queues; /**< Number of RX queues. */
+ uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+ struct rte_eth_dev_sriov sriov; /**< SRIOV data */
+
+ void *dev_private;
+ /**< PMD-specific private data.
+ * @see rte_eth_dev_release_port()
+ */
+
+ struct rte_eth_link dev_link; /**< Link-level information & status. */
+ struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
+ uint16_t mtu; /**< Maximum Transmission Unit. */
+ uint32_t min_rx_buf_size;
+ /**< Common RX buffer size handled by all queues. */
+
+ uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+ struct rte_ether_addr *mac_addrs;
+ /**< Device Ethernet link address.
+ * @see rte_eth_dev_release_port()
+ */
+ uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ /**< Bitmap associating MAC addresses to pools. */
+ struct rte_ether_addr *hash_mac_addrs;
+ /**< Device Ethernet MAC addresses of hash filtering.
+ * @see rte_eth_dev_release_port()
+ */
+ uint16_t port_id; /**< Device [external] port identifier. */
+
+ __extension__
+ uint8_t promiscuous : 1,
+ /**< RX promiscuous mode ON(1) / OFF(0). */
+ scattered_rx : 1,
+ /**< RX of scattered packets is ON(1) / OFF(0) */
+ all_multicast : 1,
+ /**< RX all multicast mode ON(1) / OFF(0). */
+ dev_started : 1,
+ /**< Device state: STARTED(1) / STOPPED(0). */
+ lro : 1,
+ /**< RX LRO is ON(1) / OFF(0) */
+ dev_configured : 1;
+ /**< Indicates whether the device is configured.
+ * CONFIGURED(1) / NOT CONFIGURED(0).
+ */
+ uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint32_t dev_flags; /**< Capabilities. */
+ int numa_node; /**< NUMA node connection. */
+ struct rte_vlan_filter_conf vlan_filter_conf;
+ /**< VLAN filter configuration. */
+ struct rte_eth_dev_owner owner; /**< The port owner. */
+ uint16_t representor_id;
+ /**< Switch-specific identifier.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
+ uint16_t backer_port_id;
+ /**< Port ID of the backing device.
+ * This device will be used to query representor
+ * info and calculate representor IDs.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
+
+ pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
/**< @internal Declaration of the hairpin peer queue information structure. */
struct rte_hairpin_peer_info;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index df7168ca4b..2b8660c578 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -109,153 +109,4 @@ struct rte_eth_fp_ops {
extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
- struct rte_eth_rxtx_callback *next;
- union{
- rte_rx_callback_fn rx;
- rte_tx_callback_fn tx;
- } fn;
- void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
- eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
- eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
- eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
- eth_rx_queue_count_t rx_queue_count; /**< Get the number of used RX descriptors. */
- eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
- eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
- /**
- * Next two fields are per-device data but *data is shared between
- * primary and secondary processes and *process_private is per-process
- * private. The second one is managed by PMDs if necessary.
- */
- struct rte_eth_dev_data *data; /**< Pointer to device data. */
- void *process_private; /**< Pointer to per-process device data. */
- const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
- struct rte_device *device; /**< Backing device */
- struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
- /** User application callbacks for NIC interrupts */
- struct rte_eth_dev_cb_list link_intr_cbs;
- /**
- * User-supplied functions called from rx_burst to post-process
- * received packets before passing them to the user
- */
- struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- /**
- * User-supplied functions called from tx_burst to pre-process
- * received packets before passing them to the driver for transmission.
- */
- struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- enum rte_eth_dev_state state; /**< Flag indicating the port state */
- void *security_ctx; /**< Context for security ops */
-
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
- char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
- void **rx_queues; /**< Array of pointers to RX queues. */
- void **tx_queues; /**< Array of pointers to TX queues. */
- uint16_t nb_rx_queues; /**< Number of RX queues. */
- uint16_t nb_tx_queues; /**< Number of TX queues. */
-
- struct rte_eth_dev_sriov sriov; /**< SRIOV data */
-
- void *dev_private;
- /**< PMD-specific private data.
- * @see rte_eth_dev_release_port()
- */
-
- struct rte_eth_link dev_link; /**< Link-level information & status. */
- struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
- uint16_t mtu; /**< Maximum Transmission Unit. */
- uint32_t min_rx_buf_size;
- /**< Common RX buffer size handled by all queues. */
-
- uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
- struct rte_ether_addr *mac_addrs;
- /**< Device Ethernet link address.
- * @see rte_eth_dev_release_port()
- */
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
- /**< Bitmap associating MAC addresses to pools. */
- struct rte_ether_addr *hash_mac_addrs;
- /**< Device Ethernet MAC addresses of hash filtering.
- * @see rte_eth_dev_release_port()
- */
- uint16_t port_id; /**< Device [external] port identifier. */
-
- __extension__
- uint8_t promiscuous : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
- scattered_rx : 1, /**< RX of scattered packets is ON(1) / OFF(0) */
- all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
- dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
- lro : 1, /**< RX LRO is ON(1) / OFF(0) */
- dev_configured : 1;
- /**< Indicates whether the device is configured.
- * CONFIGURED(1) / NOT CONFIGURED(0).
- */
- uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint32_t dev_flags; /**< Capabilities. */
- int numa_node; /**< NUMA node connection. */
- struct rte_vlan_filter_conf vlan_filter_conf;
- /**< VLAN filter configuration. */
- struct rte_eth_dev_owner owner; /**< The port owner. */
- uint16_t representor_id;
- /**< Switch-specific identifier.
- * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
- */
- uint16_t backer_port_id;
- /**< Port ID of the backing device.
- * This device will be used to query representor
- * info and calculate representor IDs.
- * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
- */
-
- pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
#endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index ca81f5d237..96ac8abb6b 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -73,7 +73,6 @@ DPDK_22 {
rte_eth_dev_udp_tunnel_port_add;
rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
- rte_eth_devices;
rte_eth_find_next;
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
@@ -269,6 +268,7 @@ INTERNAL {
rte_eth_dev_release_port;
rte_eth_dev_internal_reset;
rte_eth_devargs_parse;
+ rte_eth_devices;
rte_eth_dma_zone_free;
rte_eth_dma_zone_reserve;
rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
#include <rte_common.h>
#include <rte_dev.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
*/
#include <rte_spinlock.h>
#include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "eventdev_pmd.h"
#include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_string_fns.h>
#ifdef RTE_LIB_TELEMETRY
#include <telemetry_internal.h>
--
2.26.3
* [dpdk-dev] [PATCH v6 4/6] ethdev: make fast-path functions to use new flat array
2021-10-13 13:36 4% ` [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures Konstantin Ananyev
2021-10-13 13:37 6% ` [dpdk-dev] [PATCH v6 2/6] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-13 13:37 2% ` [dpdk-dev] [PATCH v6 3/6] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-13 13:37 2% ` Konstantin Ananyev
2021-10-13 13:37 8% ` [dpdk-dev] [PATCH v6 6/6] ethdev: hide eth dev related structures Konstantin Ananyev
3 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-13 13:37 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are
required) and PMD developers (no changes in PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
involves one extra function call. That might cause an insignificant
slowdown on code paths where RX/TX callbacks are heavily involved.
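
To illustrate the "transparent for users" point, below is a minimal,
hypothetical application-side sketch (the port/queue values, callback
body and helper names are made up for the example; the APIs used -
rte_eth_add_rx_callback() and rte_eth_rx_burst() - are the existing
public ones). The calls look exactly as before; only the internal
dispatch changes:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Example Rx callback: counts received packets; user_param is a uint64_t *. */
static uint16_t
count_rx_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkts[],
	    uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	uint64_t *cnt = user_param;

	RTE_SET_USED(port_id);
	RTE_SET_USED(queue_id);
	RTE_SET_USED(pkts);
	RTE_SET_USED(max_pkts);
	*cnt += nb_pkts;	/* now reached via rte_eth_call_rx_callbacks() */
	return nb_pkts;
}

static void
example_rx_poll(uint16_t port_id, uint16_t queue_id, uint64_t *cnt)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb;

	/* registration API is untouched (done once at init time in real code) */
	rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb, cnt);

	/* rte_eth_rx_burst() now dispatches through rte_eth_fp_ops[port_id],
	 * but the call itself is identical to previous releases
	 */
	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	rte_pktmbuf_free_bulk(pkts, nb);
}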
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ethdev/ethdev_private.c | 31 +++++
lib/ethdev/rte_ethdev.h | 270 +++++++++++++++++++++++++-----------
lib/ethdev/version.map | 3 +
3 files changed, 226 insertions(+), 78 deletions(-)
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index d810c3a1d4..c905c2df6f 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->txq.data = dev->data->tx_queues;
fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
}
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+ nb_pkts, cb->param);
+ cb = cb->next;
+ }
+
+ return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+ cb->param);
+ cb = cb->next;
+ }
+
+ return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4007bd0e73..f4c92b3b5e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4884,6 +4884,33 @@ int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
#include <rte_ethdev_core.h>
+/**
+ * @internal
+ * Helper routine for rte_eth_rx_burst().
+ * Should be called on exit from the PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes Rx callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ * The address of an array of pointers to *rte_mbuf* structures that
+ * have been retrieved from the device.
+ * @param nb_rx
+ * The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ * The number of elements in @p rx_pkts array.
+ * @param opaque
+ * Opaque pointer of Rx queue callback related data.
+ *
+ * @return
+ * The number of packets effectively supplied to the @p rx_pkts array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque);
+
/**
*
* Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4975,39 +5002,51 @@ static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
uint16_t nb_rx;
+ struct rte_eth_fp_ops *p;
+ void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_RX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_rx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
- nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
- rx_pkts, nb_pkts);
-#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
+ nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
- /* __ATOMIC_RELEASE memory order was used when the
- * call back was inserted into the list.
- * Since there is a clear dependency between loading
- * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
- * not required.
- */
- cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+#ifdef RTE_ETHDEV_RXTX_CALLBACKS
+ {
+ void *cb;
+
+ /* __ATOMIC_RELEASE memory order was used when the
+ * call back was inserted into the list.
+ * Since there is a clear dependency between loading
+ * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+ * not required.
+ */
+ cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id],
__ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
- nb_pkts, cb->param);
- cb = cb->next;
- } while (cb != NULL);
+ if (unlikely(cb != NULL))
+ nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
+ rx_pkts, nb_rx, nb_pkts, cb);
}
#endif
@@ -5031,16 +5070,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
static inline int
rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
+
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- dev = &rte_eth_devices[port_id];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
- if (queue_id >= dev->data->nb_rx_queues ||
- dev->data->rx_queues[queue_id] == NULL)
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+ if (qd == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+ return (int)(*p->rx_queue_count)(qd);
}
/**@{@name Rx hardware descriptor states
@@ -5088,21 +5138,30 @@ static inline int
rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *rxq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_RX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_RX
- if (queue_id >= dev->data->nb_rx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
- rxq = dev->data->rx_queues[queue_id];
-
- return (*dev->rx_descriptor_status)(rxq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+ return (*p->rx_descriptor_status)(qd, offset);
}
/**@{@name Tx hardware descriptor states
@@ -5149,23 +5208,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
uint16_t queue_id, uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *txq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
- txq = dev->data->tx_queues[queue_id];
-
- return (*dev->tx_descriptor_status)(txq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+ return (*p->tx_descriptor_status)(qd, offset);
}
+/**
+ * @internal
+ * Helper routine for rte_eth_tx_burst().
+ * Should be called before entering the PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes Tx callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the transmit queue through which output packets must be
+ * sent.
+ * @param tx_pkts
+ * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ * which contain the output packets.
+ * @param nb_pkts
+ * The maximum number of packets to transmit.
+ * @return
+ * The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
/**
* Send a burst of output packets on a transmit queue of an Ethernet device.
*
@@ -5236,42 +5326,55 @@ static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_fp_ops *p;
+ void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
-
- /* __ATOMIC_RELEASE memory order was used when the
- * call back was inserted into the list.
- * Since there is a clear dependency between loading
- * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
- * not required.
- */
- cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+ {
+ void *cb;
+
+ /* __ATOMIC_RELEASE memory order was used when the
+ * call back was inserted into the list.
+ * Since there is a clear dependency between loading
+ * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+ * not required.
+ */
+ cb = __atomic_load_n((void **)&p->txq.clbk[queue_id],
__ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
- cb->param);
- cb = cb->next;
- } while (cb != NULL);
+ if (unlikely(cb != NULL))
+ nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id,
+ tx_pkts, nb_pkts, cb);
}
#endif
- rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
- nb_pkts);
- return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+ nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+ rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+ return nb_pkts;
}
/**
@@ -5334,31 +5437,42 @@ static inline uint16_t
rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
rte_errno = ENODEV;
return 0;
}
#endif
- dev = &rte_eth_devices[port_id];
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ rte_errno = ENODEV;
+ return 0;
+ }
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
rte_errno = EINVAL;
return 0;
}
#endif
- if (!dev->tx_pkt_prepare)
+ if (!p->tx_pkt_prepare)
return nb_pkts;
- return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
- tx_pkts, nb_pkts);
+ return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
}
#else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 29fb71f1af..61011b110a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_call_rx_callbacks;
+ rte_eth_call_tx_callbacks;
rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
rte_eth_find_next_sibling;
+ rte_eth_fp_ops;
rte_eth_iterator_cleanup;
rte_eth_iterator_init;
rte_eth_iterator_next;
--
2.26.3
* [dpdk-dev] [PATCH v6 3/6] ethdev: copy fast-path API into separate structure
2021-10-13 13:36 4% ` [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures Konstantin Ananyev
2021-10-13 13:37 6% ` [dpdk-dev] [PATCH v6 2/6] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-13 13:37 2% ` Konstantin Ananyev
2021-10-13 14:25 0% ` Andrew Rybchenko
2021-10-13 13:37 2% ` [dpdk-dev] [PATCH v6 4/6] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
2021-10-13 13:37 8% ` [dpdk-dev] [PATCH v6 6/6] ethdev: hide eth dev related structures Konstantin Ananyev
3 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-13 13:37 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Copy the public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures
internal. That should allow possible future changes to the core
eth_dev structures to be transparent to the user and help to avoid
ABI/API breakages.
The plan is to keep a minimal part of the rte_eth_dev data public,
so we can still use inline functions for fast-path calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize any slowdown.
The whole idea behind this new scheme:
1. PMDs keep setting up fast-path function pointers and related data
inside the rte_eth_dev struct in the same way they did before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
(for secondary processes) we call eth_dev_fp_ops_setup(), which
copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
to dummy values.
4. The fast-path ethdev API (rte_eth_rx_burst(), etc.) uses that new
flat array to call PMD-specific functions (see the sketch after this
description).
That approach should allow us to make rte_eth_devices[] private
without introducing regressions and helps avoid changes in driver code.
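
As a rough illustration of step 4 (the sketch mentioned above), this is
approximately what the fast-path inline wrappers boil down to once the
debug checks and Rx/Tx callback handling are stripped away; it is a
simplified sketch only, the real implementation lands in patch 4/6 of
this series:

#include <rte_ethdev.h>	/* rte_eth_fp_ops[] comes from rte_ethdev_core.h
			 * once this series is applied
			 */

static inline uint16_t
sketch_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	/* one flat-array lookup instead of dereferencing rte_eth_devices[] */
	struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];

	/* on a stopped or unused port this hits the dummy burst installed by
	 * eth_dev_fp_ops_reset(), which sets rte_errno = ENOTSUP and returns 0
	 */
	return p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
}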
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ethdev/ethdev_private.c | 52 +++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.h | 7 +++++
lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 59 ++++++++++++++++++++++++++++++++++++
4 files changed, 145 insertions(+)
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..d810c3a1d4 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
return str == NULL ? -1 : 0;
}
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+ __rte_unused struct rte_mbuf **rx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for not ready port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for not ready port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_eth_fp_ops dummy_ops = {
+ .rx_pkt_burst = dummy_eth_rx_burst,
+ .tx_pkt_burst = dummy_eth_tx_burst,
+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
+ .txq = {.data = dummy_data, .clbk = dummy_data,},
+ };
+
+ *fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev)
+{
+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+ fpo->rx_queue_count = dev->rx_queue_count;
+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+ fpo->rxq.data = dev->data->rx_queues;
+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+ fpo->txq.data = dev->data->tx_queues;
+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..5721be7bdc 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
/* Parse devargs value for representor parameter. */
int rte_eth_devargs_parse_representor_ports(char *str, void *data);
+/* reset eth fast-path API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth fast-path API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index e446f3b3f8..178f5b88b7 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
+/* public fast-path API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/* spinlock for eth device callbacks */
static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -579,6 +582,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
rte_eth_dev_callback_process(eth_dev,
RTE_ETH_EVENT_DESTROY, NULL);
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1792,6 +1797,9 @@ rte_eth_dev_start(uint16_t port_id)
(*dev->dev_ops->link_update)(dev, 0);
}
+ /* expose selection of PMD fast-path functions */
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
rte_ethdev_trace_start(port_id);
return 0;
}
@@ -1814,6 +1822,9 @@ rte_eth_dev_stop(uint16_t port_id)
return 0;
}
+ /* point fast-path functions to dummy ones */
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
dev->data->dev_started = 0;
ret = (*dev->dev_ops->dev_stop)(dev);
rte_ethdev_trace_stop(port_id, ret);
@@ -4477,6 +4488,14 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
queue_idx, tx_rate));
}
+RTE_INIT(eth_dev_init_fp_ops)
+{
+ uint32_t i;
+
+ for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
RTE_INIT(eth_dev_init_cb_lists)
{
uint16_t i;
@@ -4645,6 +4664,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
if (dev == NULL)
return;
+ /*
+ * For a secondary process, at this point we expect the device to be
+ * already 'usable', so shared data and all function pointers for
+ * fast-path devops must already be set up properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index af824ef890..df7168ca4b 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -50,6 +50,65 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
/**< @internal Check the status of a Tx descriptor */
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queue data.
+ * The main purpose of exposing these pointers at all is to allow the compiler
+ * to fetch this data for fast-path ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+ /** points to array of internal queue data pointers */
+ void **data;
+ /** points to array of queue callback data pointers */
+ void **clbk;
+};
+
+/**
+ * @internal
+ * Fast-path ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ * On 64-bit systems the contents of this structure occupy exactly two 64B lines.
+ * On 32-bit systems the contents of this structure fit into one 64B line.
+ */
+struct rte_eth_fp_ops {
+
+ /**@{*/
+ /**
+ * Rx fast-path functions and related data.
+ * 64-bit systems: occupies first 64B line
+ */
+ /** PMD receive function. */
+ eth_rx_burst_t rx_pkt_burst;
+ /** Get the number of used RX descriptors. */
+ eth_rx_queue_count_t rx_queue_count;
+ /** Check the status of a Rx descriptor. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
+ uintptr_t reserved1[3];
+ /**@}*/
+
+ /**@{*/
+ /**
+ * Tx fast-path functions and related data.
+ * 64-bit systems: occupies second 64B line
+ */
+ /** PMD transmit function. */
+ eth_tx_burst_t tx_pkt_burst;
+ /** PMD transmit prepare function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /** Check the status of a Tx descriptor. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
+ uintptr_t reserved2[3];
+ /**@}*/
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/**
* @internal
--
2.26.3
* [dpdk-dev] [PATCH v6 2/6] ethdev: change input parameters for rx_queue_count
2021-10-13 13:36 4% ` [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures Konstantin Ananyev
@ 2021-10-13 13:37 6% ` Konstantin Ananyev
2021-10-13 13:37 2% ` [dpdk-dev] [PATCH v6 3/6] ethdev: copy fast-path API into separate structure Konstantin Ananyev
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-13 13:37 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Currently, the majority of fast-path ethdev ops take pointers to
internal queue data structures as an input parameter, while
eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it is
reasonable to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to
internal queue data as its input parameter.
While this change is transparent to the user, it still counts as an
ABI change, as eth_rx_queue_count_t is used by the public ethdev
inline function rte_eth_rx_queue_count().
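
In other words, only the internal callback type changes (roughly from
taking an rte_eth_dev pointer plus a queue index to taking the queue
pointer alone, per the rte_ethdev_core.h change in the diffstat); the
public wrapper keeps its prototype, so a hypothetical application-side
caller like the sketch below is unaffected (port/queue values and the
printout are made up for the example):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_rx_backlog(uint16_t port_id, uint16_t queue_id)
{
	/* same public call before and after this patch */
	int used = rte_eth_rx_queue_count(port_id, queue_id);

	if (used < 0)	/* -EINVAL, -ENODEV or -ENOTSUP on error */
		printf("port %u queue %u: error %d\n", port_id, queue_id, used);
	else
		printf("port %u queue %u: %d Rx descriptors in use\n",
		       port_id, queue_id, used);
}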
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 ++++++
drivers/net/ark/ark_ethdev_rx.c | 4 ++--
drivers/net/ark/ark_ethdev_rx.h | 3 +--
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 8 +++++---
drivers/net/dpaa/dpaa_ethdev.c | 9 ++++-----
drivers/net/dpaa2/dpaa2_ethdev.c | 9 ++++-----
drivers/net/e1000/e1000_ethdev.h | 6 ++----
drivers/net/e1000/em_rxtx.c | 4 ++--
drivers/net/e1000/igb_rxtx.c | 4 ++--
drivers/net/enic/enic_ethdev.c | 12 ++++++------
drivers/net/fm10k/fm10k.h | 2 +-
drivers/net/fm10k/fm10k_rxtx.c | 4 ++--
drivers/net/hns3/hns3_rxtx.c | 7 +++++--
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_rxtx.c | 4 ++--
drivers/net/i40e/i40e_rxtx.h | 3 +--
drivers/net/iavf/iavf_rxtx.c | 4 ++--
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 ++--
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_txrx.c | 5 ++---
drivers/net/igc/igc_txrx.h | 3 +--
drivers/net/ixgbe/ixgbe_ethdev.h | 3 +--
drivers/net/ixgbe/ixgbe_rxtx.c | 4 ++--
drivers/net/mlx5/mlx5_rx.c | 26 ++++++++++++-------------
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 ++--
drivers/net/netvsc/hn_var.h | 2 +-
drivers/net/nfp/nfp_rxtx.c | 4 ++--
drivers/net/nfp/nfp_rxtx.h | 3 +--
drivers/net/octeontx2/otx2_ethdev.h | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 8 ++++----
drivers/net/sfc/sfc_ethdev.c | 12 ++++++------
drivers/net/thunderx/nicvf_ethdev.c | 3 +--
drivers/net/thunderx/nicvf_rxtx.c | 4 ++--
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +--
drivers/net/txgbe/txgbe_rxtx.c | 4 ++--
drivers/net/vhost/rte_eth_vhost.c | 4 ++--
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_core.h | 3 +--
43 files changed, 103 insertions(+), 110 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5c762df62..bb884f5f32 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -299,6 +299,12 @@ ABI Changes
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` were changed.
+ Instead of a pointer to ``rte_eth_dev`` and a queue index, it now accepts a
+ pointer to internal queue data as its input parameter. While this change is
+ transparent to the user, it still counts as an ABI change, as
+ ``eth_rx_queue_count_t`` is used by the public inline function ``rte_eth_rx_queue_count``.
+
Known Issues
------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
}
uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
{
struct ark_rx_queue *queue;
- queue = dev->data->rx_queues[queue_id];
+ queue = rx_queue;
return (queue->prod_index - queue->cons_index); /* mod arith */
}
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index a2d1d4397c..fbc9917ed3 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index fca682d8b0..0d3460383a 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Return Rx queue avail count */
uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
{
struct atl_rx_queue *rxq;
PMD_INIT_FUNC_TRACE();
- if (rx_queue_id >= dev->data->nb_rx_queues) {
- PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
- return 0;
- }
-
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
if (rxq == NULL)
return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index a98f93ab29..ebda74d02f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3154,20 +3154,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
}
static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
{
- struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+ struct bnxt *bp;
struct bnxt_cp_ring_info *cpr;
uint32_t desc = 0, raw_cons, cp_ring_size;
struct bnxt_rx_queue *rxq;
struct rx_pkt_cmpl *rxcmp;
int rc;
+ rxq = rx_queue;
+ bp = rxq->bp;
+
rc = is_bnxt_in_error(bp);
if (rc)
return rc;
- rxq = dev->data->rx_queues[rx_queue_id];
cpr = rxq->cp_ring;
raw_cons = cpr->cp_raw_cons;
cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 59f4a93b3e..c087ce6341 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1267,17 +1267,16 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
- struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ struct qman_fq *rxq = rx_queue;
u32 frm_cnt = 0;
PMD_INIT_FUNC_TRACE();
if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
- DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frm_cnt);
+ DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frm_cnt);
}
return frm_cnt;
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ff8ae89922..f2519f0fad 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1007,10 +1007,9 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
{
int32_t ret;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
struct dpaa2_queue *dpaa2_q;
struct qbman_swp *swp;
struct qbman_fq_query_np_rslt state;
@@ -1027,12 +1026,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
swp = DPAA2_PER_LCORE_PORTAL;
- dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+ dpaa2_q = rx_queue;
if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
frame_cnt = qbman_fq_state_frame_count(&state);
- DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frame_cnt);
+ DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frame_cnt);
}
return frame_cnt;
}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index dafd586b12..050852be79 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
@@ -474,8 +473,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 9994703cc2..506b4159a2 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1495,14 +1495,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
{
#define EM_RXQ_SCAN_INTERVAL 4
volatile struct e1000_rx_desc *rxdp;
struct em_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 7b2a6b0490..e04c2b41ab 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1776,14 +1776,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
{
#define IGB_RXQ_SCAN_INTERVAL 4
volatile union e1000_adv_rx_desc *rxdp;
struct igb_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index b03e56bc25..b94332cc86 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -237,18 +237,18 @@ static void enicpmd_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
enic_free_rq(rxq);
}
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
{
- struct enic *enic = pmd_priv(dev);
+ struct enic *enic;
+ struct vnic_rq *sop_rq;
uint32_t queue_count = 0;
struct vnic_cq *cq;
uint32_t cq_tail;
uint16_t cq_idx;
- int rq_num;
- rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
- cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+ sop_rq = rx_queue;
+ enic = vnic_dev_priv(sop_rq->vdev);
+ cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
cq_idx = cq->to_clean;
cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 2e47ada829..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
int
fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index d9833505d1..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
{
#define FM10K_RXQ_SCAN_INTERVAL 4
volatile union fm10k_rx_desc *rxdp;
struct fm10k_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->hw_ring[rxq->next_dd];
while ((desc < rxq->nb_desc) &&
rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 70de0d2b58..02040b84f3 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4685,7 +4685,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
}
uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
{
/*
* Number of BDs that have been processed by the driver
@@ -4693,9 +4693,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
*/
uint32_t driver_hold_bd_num;
struct hns3_rx_queue *rxq;
+ const struct rte_eth_dev *dev;
uint32_t fbd_num;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
+ dev = &rte_eth_devices[rxq->port_id];
+
fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index bb309d38ed..c8229e9076 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
struct rte_mempool *mp);
int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index ab77ec04b5..3df4e3de18 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2121,14 +2121,14 @@ i40e_rx_queue_release(void *rxq)
}
uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
{
#define I40E_RXQ_SCAN_INTERVAL 4
volatile union i40e_rx_desc *rxdp;
struct i40e_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 7a24dd6be5..2301e6301d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -229,8 +229,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88661e5d74..88bbd40c10 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Get the number of used descriptors of a rx queue */
uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
{
#define IAVF_RXQ_SCAN_INTERVAL 4
volatile union iavf_rx_desc *rxdp;
struct iavf_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 9591e45cb0..f4ae2fd6e1 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 7a2220daa4..b06c2f1438 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1460,14 +1460,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
}
uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
{
#define ICE_RXQ_SCAN_INTERVAL 4
volatile union ice_rx_flex_desc *rxdp;
struct ice_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c5ec6b7d1a..e1c644fb63 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -230,7 +230,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
struct ice_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index a66ce1d2b7..383bf834f3 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
igc_rx_queue_release(dev->data->rx_queues[qid]);
}
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
{
/**
* Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
struct igc_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index bb040366f0..535108a868 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c01a74de89..950fb2d245 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -594,8 +594,7 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0ac89cb711..4d3d30b662 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3262,14 +3262,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
{
#define IXGBE_RXQ_SCAN_INTERVAL 4
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
/**
* DPDK callback to get the number of used descriptors in a RX queue.
*
- * @param dev
- * Pointer to the device structure.
- *
- * @param rx_queue_id
- * The Rx queue.
+ * @param rx_queue
+ * The Rx queue pointer.
*
* @return
* The number of used rx descriptor.
* -EINVAL if the queue is invalid
*/
uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq;
+ struct mlx5_rxq_data *rxq = rx_queue;
+ struct rte_eth_dev *dev;
+
+ if (!rxq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
dev->rx_pkt_burst == removed_rx_burst) {
rte_errno = ENOTSUP;
return -rte_errno;
}
- rxq = (*priv->rxqs)[rx_queue_id];
- if (!rxq) {
- rte_errno = EINVAL;
- return -rte_errno;
- }
+
return rx_queue_count(rxq);
}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2b7ad3e48b..bdc48f3d9f 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index e880dc2bb2..f8fff1bcd1 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
* For this device that means how many packets are pending in the ring.
*/
uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
{
- struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+ struct hn_rx_queue *rxq = rx_queue;
return rte_ring_count(rxq->rx_ring);
}
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2cd1f8a881..18703f99b9 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void hn_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
int hn_dev_rx_queue_status(void *rxq, uint16_t offset);
void hn_dev_free_queues(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index feeacb5614..733f81e4b2 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
}
uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
{
struct nfp_net_rxq *rxq;
struct nfp_net_rx_desc *rxds;
uint32_t idx;
uint32_t count;
- rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
idx = rxq->rd_p;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index ab49898605..1a813ded15 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
} __rte_aligned(64);
int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 0d73013433..7a8d19d541 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5cb3905b64..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
}
uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_rxq *rxq = rx_queue;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
uint32_t head, tail;
- nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
return (tail - head) % rxq->qlen;
}
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c0d9810fbb..603af6242d 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1356,19 +1356,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
* use any process-local pointers from the adapter data.
*/
static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
{
- const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
- struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
- sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+ struct sfc_dp_rxq *dp_rxq = rx_queue;
+ const struct sfc_dp_rx *dp_rx;
struct sfc_rxq_info *rxq_info;
- rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+ dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+ rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
return 0;
- return sap->dp_rx->qdesc_npending(rxq_info->dp);
+ return dp_rx->qdesc_npending(dp_rxq);
}
/*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..2103f96d5e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
if (dev->rx_pkt_burst == NULL)
return;
- while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
- nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+ while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
NICVF_MAX_RX_FREE_THRESH);
PMD_DRV_LOG(INFO, "nb_pkts=%d rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
}
uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
{
struct nicvf_rxq *rxq;
- rxq = dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
*(uint64_t *)(&pkt->rearm_data) = init.value;
}
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 112567eecc..528f11439b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -438,8 +438,7 @@ int txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index b6339fe50b..4849fb385e 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
{
#define TXGBE_RXQ_SCAN_INTERVAL 4
volatile struct txgbe_rx_desc *rxdp;
struct txgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2e24e5f7ff..a7935a716d 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1375,11 +1375,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
}
static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
{
struct vhost_queue *vq;
- vq = dev->data->rx_queues[rx_queue_id];
+ vq = rx_queue;
if (vq == NULL)
return 0;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index cb847a2c38..4007bd0e73 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5040,7 +5040,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
dev->data->rx_queues[queue_id] == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev, queue_id);
+ return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
}
/**@{@name Rx hardware descriptor states
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 8aae713af6..af824ef890 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
/**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
/**< @internal Get number of used descriptors on a receive queue. */
typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
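To summarise the change in this patch with a hedged sketch (the "demo" names below are made up purely for illustration; real drivers are converted exactly as in the hunks above): the driver callback now receives the Rx queue pointer directly, while the application-facing API keeps its (port_id, queue_id) signature.
#include <stdint.h>
#include <rte_ethdev.h>
/* Hypothetical driver-side queue and callback, showing the new
 * eth_rx_queue_count_t signature (queue pointer instead of (dev, qid)).
 */
struct demo_rxq {
	uint16_t nb_used; /* descriptors currently in use */
};
static uint32_t
demo_rx_queue_count(void *rx_queue)
{
	struct demo_rxq *rxq = rx_queue;
	return rxq->nb_used;
}
/* The application side is untouched: ethdev resolves the queue pointer
 * before invoking the driver callback.
 */
static int
demo_app_query(uint16_t port_id, uint16_t queue_id)
{
	return rte_eth_rx_queue_count(port_id, queue_id);
}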
--
2.26.3
^ permalink raw reply [relevance 6%]
* [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
[not found] <0211007112750.25526-1-konstantin.ananyev@intel.com>
@ 2021-10-13 13:36 4% ` Konstantin Ananyev
2021-10-13 13:37 6% ` [dpdk-dev] [PATCH v6 2/6] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-13 13:36 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
v6 changes:
- Update comments (Andrew)
- Move callback related variables under corresponding ifdefs (Andrew)
- Few nits in rte_eth_macaddrs_get (Andrew)
- Rebased on top of next-net tree
v5 changes:
- Fix spelling (Thomas/David)
- Rename internal helper functions (David)
- Reorder patches and update commit messages (Thomas)
- Update comments (Thomas)
- Changed layout in rte_eth_fp_ops to group functions and
related data based on their functionality:
first 64B cache line for Rx, second one for Tx.
Didn't observe any real performance difference compared to the
original layout, but decided to keep the new one, as it seems
a bit more sensible.
v4 changes:
- Fix secondary process attach (Pavan)
- Fix build failure (Ferruh)
- Update lib/ethdev/version.map (Ferruh)
Note that moving newly added symbols from EXPERIMENTAL to the DPDK_22
section makes checkpatch.sh complain.
v3 changes:
- Changes in public struct naming (Jerin/Haiyue)
- Split patches
- Update docs
- Shamelessly included Andrew's patch:
https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
into this series.
I have to do a similar thing here, so decided to avoid duplicated effort.
The aim of this patch series is to make the rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow possible future changes to the core ethdev structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, but this is a formal ABI break.
The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy the public 'fast' function pointers (rx_pkt_burst(), etc.) and
the related data pointers from rte_eth_dev into a separate flat array.
We keep it public to still be able to use inline functions for these
'fast' calls (rte_eth_rx_burst(), etc.) and so avoid/minimize any slowdown.
Note that apart from the function pointers themselves, each element of this
flat array also contains two opaque pointers per ethdev:
1) a pointer to an array of internal queue data pointers,
2) a pointer to an array of queue callback data pointers.
Exposing this extra information lets us avoid extra changes at the PMD
level and should help to avoid possible performance degradation
(see the sketch after this list).
2. Change the implementation of the 'fast' inline ethdev functions
(rte_eth_rx_burst(), etc.) to use the new public flat array.
While it is an ABI breakage, this change is intended to be transparent
for both users (no changes in user apps are required) and PMD developers
(no changes in PMDs are required).
One extra note - with the new implementation, RX/TX callback invocation
costs one extra function call. That might cause some slowdown for
code paths with RX/TX callbacks heavily involved.
Hope such a trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
things into internal header: <ethdev_driver.h>.
That approach was selected to:
- Avoid(/minimize) possible performance losses.
- Minimize required changes inside PMDs.
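To make point 1 above more concrete, here is a minimal C sketch of the flat-array idea. The type and field names are illustrative assumptions only; the actual definitions added by the series live in lib/ethdev (rte_ethdev_core.h) and may differ in naming and exact layout.
/* Illustrative sketch only - not the definitions added by this series. */
#include <stdint.h>
struct rte_mbuf; /* opaque for the purpose of this sketch */
typedef uint16_t (*rx_burst_fn)(void *rxq, struct rte_mbuf **pkts, uint16_t n);
typedef uint32_t (*rx_qcount_fn)(void *rxq); /* queue pointer, not (dev, qid) */
typedef uint16_t (*tx_burst_fn)(void *txq, struct rte_mbuf **pkts, uint16_t n);
struct fp_qdata {
	void **data; /* per-queue private data pointers, indexed by queue id */
	void **clbk; /* per-queue RX/TX callback data pointers, indexed by queue id */
};
/* One public entry per port, indexed by port_id, so rte_eth_rx_burst()
 * and friends can stay inline and never dereference struct rte_eth_dev.
 */
struct fp_ops_sketch {
	/* first 64B cache line: Rx functions and related data */
	rx_burst_fn rx_pkt_burst;
	rx_qcount_fn rx_queue_count;
	struct fp_qdata rxq;
	/* second 64B cache line: Tx functions and related data */
	tx_burst_fn tx_pkt_burst;
	struct fp_qdata txq;
};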
Performance testing results (ICX 2.0GHz, E810 (ice)):
- testpmd macswap fwd mode, plus
a) no RX/TX callbacks:
no actual slowdown observed
b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
~2% slowdown
- l3fwd: no actual slowdown observed
Would like to thank everyone who already reviewed and tested previous
versions of this series. All other interested parties, please don't be shy
and provide your feedback.
Konstantin Ananyev (6):
ethdev: allocate max space for internal queue array
ethdev: change input parameters for rx_queue_count
ethdev: copy fast-path API into separate structure
ethdev: make fast-path functions to use new flat array
ethdev: add API to retrieve multiple ethernet addresses
ethdev: hide eth dev related structures
app/test-pmd/config.c | 23 +-
doc/guides/rel_notes/release_21_11.rst | 17 +
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/ark/ark_ethdev_rx.c | 4 +-
drivers/net/ark/ark_ethdev_rx.h | 3 +-
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 +-
drivers/net/bnxt/bnxt_ethdev.c | 8 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa/dpaa_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/e1000/e1000_ethdev.h | 6 +-
drivers/net/e1000/em_rxtx.c | 4 +-
drivers/net/e1000/igb_rxtx.c | 4 +-
drivers/net/enic/enic_ethdev.c | 12 +-
drivers/net/fm10k/fm10k.h | 2 +-
drivers/net/fm10k/fm10k_rxtx.c | 4 +-
drivers/net/hns3/hns3_rxtx.c | 7 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/i40e/i40e_rxtx.h | 3 +-
drivers/net/iavf/iavf_rxtx.c | 4 +-
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 +-
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_txrx.c | 5 +-
drivers/net/igc/igc_txrx.h | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.h | 3 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 4 +-
drivers/net/mlx5/mlx5_rx.c | 26 +-
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/netvsc/hn_var.h | 3 +-
drivers/net/nfp/nfp_rxtx.c | 4 +-
drivers/net/nfp/nfp_rxtx.h | 3 +-
drivers/net/octeontx2/otx2_ethdev.h | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 12 +-
drivers/net/thunderx/nicvf_ethdev.c | 3 +-
drivers/net/thunderx/nicvf_rxtx.c | 4 +-
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +-
drivers/net/txgbe/txgbe_rxtx.c | 4 +-
drivers/net/vhost/rte_eth_vhost.c | 4 +-
lib/ethdev/ethdev_driver.h | 154 +++++++++
lib/ethdev/ethdev_private.c | 83 +++++
lib/ethdev/ethdev_private.h | 7 +
lib/ethdev/rte_ethdev.c | 90 ++++--
lib/ethdev/rte_ethdev.h | 291 +++++++++++++-----
lib/ethdev/rte_ethdev_core.h | 177 +++--------
lib/ethdev/version.map | 6 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
57 files changed, 694 insertions(+), 373 deletions(-)
--
2.26.3
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T
2021-10-13 12:13 5% ` [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-13 12:13 5% ` Radu Nicolau
1 sibling, 0 replies; 200+ results
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port parameters for the UDP encapsulation option.
RFC 3948 section 2.1 does not enforce using specific UDP ports for the
UDP-Encapsulated ESP header.
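As a quick illustration of the new fields (a hedged sketch only: SA/session setup is omitted and the port numbers are arbitrary examples):
#include <rte_security.h>
/* Sketch: request custom NAT-T ports in an IPsec SA configuration.
 * Only the fields touched by this patch are shown; building the rest
 * of the xform is assumed to happen elsewhere.
 */
static void
set_natt_ports(struct rte_security_ipsec_xform *ipsec_xform)
{
	ipsec_xform->options.udp_encap = 1; /* existing UDP encapsulation option */
	ipsec_xform->udp.sport = 4500;      /* example source port */
	ipsec_xform->udp.dport = 4500;      /* example destination port */
}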
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 ++---
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/security/rte_security.h | 7 +++++++
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index accb9c7d83..6517e7821f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -204,9 +204,8 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
- multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with:
+ new field: IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 544d44b1a8..1748c2db05 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -274,6 +274,10 @@ ABI Changes
application to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
+* security: A new structure ``udp`` was added in structure
+ ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+ for UDP encapsulated IPsec traffic.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 764ce83bca..17d0e95412 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -326,6 +331,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform
@ 2021-10-13 12:13 5% ` Radu Nicolau
2021-10-13 12:13 5% ` [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
1 sibling, 0 replies; 200+ results
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update the ipsec_xform definition to include an ESN field.
This allows the application to control the ESN starting value.
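A minimal usage sketch of the new field follows (hedged: the starting value is an arbitrary example and the rest of the SA configuration is omitted):
#include <rte_security.h>
/* Sketch: start the SA from a non-zero ESN using the union added below.
 * esn.value overlays the {low, hi} pair, so either form can be used.
 */
static void
set_initial_esn(struct rte_security_ipsec_xform *ipsec_xform)
{
	ipsec_xform->esn.low = 42; /* lower 32 bits of the starting ESN */
	ipsec_xform->esn.hi = 1;   /* upper 32 bits of the starting ESN */
}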
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 8 ++++++++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a4e86b31f5..accb9c7d83 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -206,7 +206,7 @@ Deprecation Notices
* security: The structure ``rte_security_ipsec_xform`` will be extended with
multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+ IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 54718ff367..f840586a20 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -265,6 +265,11 @@ ABI Changes
packet IPv4 header checksum and L4 checksum need to be offloaded to
security device.
+* security: A new structure ``esn`` was added in structure
+ ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
+ application to start from an arbitrary ESN value for debug and SA lifetime
+ enforcement purposes.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..764ce83bca 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -318,6 +318,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 11:42 0% ` Kinsella, Ray
@ 2021-10-13 11:58 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-13 11:58 UTC (permalink / raw)
To: Bruce Richardson, Kinsella, Ray, Dumitrescu, Cristian, Singh, Jasvinder
Cc: dev, Zhang, Roy Fan, david.marchand
13/10/2021 13:42, Kinsella, Ray:
> On 13/10/2021 12:11, Bruce Richardson wrote:
> > On Wed, Oct 13, 2021 at 11:02:02AM +0100, Kinsella, Ray wrote:
> >> On 13/10/2021 10:49, Thomas Monjalon wrote:
> >>> 13/10/2021 11:43, Kinsella, Ray:
> >>>> On 13/10/2021 10:40, Thomas Monjalon wrote:
> >>>>> 13/10/2021 10:51, Kinsella, Ray:
> >>>>>> On 12/10/2021 22:52, Thomas Monjalon wrote:
> >>>>>>> 12/10/2021 22:34, Dumitrescu, Cristian:
> >>>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>>>>> 01/09/2021 14:20, Jasvinder Singh:
> >>>>>>>>>> These APIs were introduced in 18.05, therefore removing
> >>>>>>>>>> experimental tag to promote them to stable state.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> >>>>>>>>>> ---
> >>>>>>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
> >>>>>>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
> >>>>>>>>>> lib/pipeline/version.map | 16 ++++++----------
> >>>>>>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
> >>>>>>>>>
> >>>>>>>>> Cristian, please can you check whether you intend to keep these functions in
> >>>>>>>>> future?
> >>>>>>>>> If they are candidate to be removed, there is no point to promote them.
> >>>>>>>>
> >>>>>>>> Hi Thomas,
> >>>>>>>>
> >>>>>>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
> >>>>>>>>
> >>>>>>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
> >>>>>>>>
> >>>>>>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
> >>>>>>>>
> >>>>>>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
> >>>>>>>
> >>>>>>> I think we should not promote API that we know will disappear soon.
> >>>>>>> The stable status means something for the users.
> >>>>>>> Ray, what is your opinion?
> >>>>>>>
> >>>>>>
> >>>>>> Well - I agree with Cristian (he and I discuss this a few weeks ago).
> >>>>>> My position is if you are going to maintain an API, that means giving a few guarantees.
> >>>>>> The API's have been experimental for 3 years ... at what point do they mature?
> >>>>>>
> >>>>>> However, I agree there is two ways to look at this thing, I try to be pragmatic.
> >>>>>> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> >>>>>> If they strongly feel, it is a pointless exercise - I won't argue.
> >>>>>
> >>>>> I think you didn't get it.
> >>>>> This API will be removed soon.
> >>>>> That's why I think it doesn't make sense to make them stable, just before removing.
> >>>>>
> >>>>
> >>>> Nope, I got it 110%
> >>>> I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
> >>>>
> >>>> As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> >>>> If they strongly feel, it is a pointless exercise - I won't argue."
> >>>
> >>> Sorry, I don't understand your position.
> >>> Do you think we should promote functions to stable which are candidate to be removed soon?
> >>>
> >>
> >> I am just reflecting the policy here.
> >>
> >> "An API’s experimental status should be reviewed annually, by both the maintainer and/or the original contributor. Ordinarily APIs marked as experimental will be promoted to the stable ABI once a maintainer has become satisfied that the API is mature and is unlikely to change."
> >>
> > If an API is planned for removal, then I think it falls under the bucket of
> > "likely to change", so should not be made non-experimental. Therefore I'd
> > agree with Thomas view on this - not so much that promoting them is
> > pointless, but I'd actually view it as harmful in encouraging use that will
> > be broken in future.
> >
>
> To be clear (again).
>
> I don't think we should promote functions needlessly, as I said, if others decide it is pointless, I won't argue.
> I do think if we have a policy, that experimental symbols will mature or be removed, we should be careful about the exceptions we make, lest the policy becomes irrelevant and ignored.
>
> Since we have argued this out, endlessly ... we can agree, we have been careful about this exception and move on?
The patch is set as rejected.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 11:11 0% ` Bruce Richardson
@ 2021-10-13 11:42 0% ` Kinsella, Ray
2021-10-13 11:58 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-13 11:42 UTC (permalink / raw)
To: Bruce Richardson
Cc: Thomas Monjalon, Dumitrescu, Cristian, dev, Zhang, Roy Fan,
Singh, Jasvinder, david.marchand
On 13/10/2021 12:11, Bruce Richardson wrote:
> On Wed, Oct 13, 2021 at 11:02:02AM +0100, Kinsella, Ray wrote:
>>
>>
>> On 13/10/2021 10:49, Thomas Monjalon wrote:
>>> 13/10/2021 11:43, Kinsella, Ray:
>>>> On 13/10/2021 10:40, Thomas Monjalon wrote:
>>>>> 13/10/2021 10:51, Kinsella, Ray:
>>>>>> On 12/10/2021 22:52, Thomas Monjalon wrote:
>>>>>>> 12/10/2021 22:34, Dumitrescu, Cristian:
>>>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>>>> 01/09/2021 14:20, Jasvinder Singh:
>>>>>>>>>> These APIs were introduced in 18.05, therefore removing
>>>>>>>>>> experimental tag to promote them to stable state.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
>>>>>>>>>> ---
>>>>>>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
>>>>>>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
>>>>>>>>>> lib/pipeline/version.map | 16 ++++++----------
>>>>>>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
>>>>>>>>>
>>>>>>>>> Cristian, please can you check whether you intend to keep these functions in
>>>>>>>>> future?
>>>>>>>>> If they are candidate to be removed, there is no point to promote them.
>>>>>>>>
>>>>>>>> Hi Thomas,
>>>>>>>>
>>>>>>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
>>>>>>>>
>>>>>>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
>>>>>>>>
>>>>>>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
>>>>>>>>
>>>>>>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
>>>>>>>
>>>>>>> I think we should not promote API that we know will disappear soon.
>>>>>>> The stable status means something for the users.
>>>>>>> Ray, what is your opinion?
>>>>>>>
>>>>>>
>>>>>> Well - I agree with Cristian (he and I discuss this a few weeks ago).
>>>>>> My position is if you are going to maintain an API, that means giving a few guarantees.
>>>>>> The API's have been experimental for 3 years ... at what point do they mature?
>>>>>>
>>>>>> However, I agree there is two ways to look at this thing, I try to be pragmatic.
>>>>>> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
>>>>>> If they strongly feel, it is a pointless exercise - I won't argue.
>>>>>
>>>>> I think you didn't get it.
>>>>> This API will be removed soon.
>>>>> That's why I think it doesn't make sense to make them stable, just before removing.
>>>>>
>>>>
>>>> Nope, I got it 110%
>>>> I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
>>>>
>>>> As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
>>>> If they strongly feel, it is a pointless exercise - I won't argue."
>>>
>>> Sorry, I don't understand your position.
>>> Do you think we should promote functions to stable which are candidate to be removed soon?
>>>
>>
>> I am just reflecting the policy here.
>>
>> "An API’s experimental status should be reviewed annually, by both the maintainer and/or the original contributor. Ordinarily APIs marked as experimental will be promoted to the stable ABI once a maintainer has become satisfied that the API is mature and is unlikely to change."
>>
> If an API is planned for removal, then I think it falls under the bucket of
> "likely to change", so should not be made non-experimental. Therefore I'd
> agree with Thomas view on this - not so much that promoting them is
> pointless, but I'd actually view it as harmful in encouraging use that will
> be broken in future.
>
To be clear (again).
I don't think we should promote functions needlessly, as I said, if others decide it is pointless, I won't argue.
I do think if we have a policy, that experimental symbols will mature or be removed, we should be careful about the exceptions we make, lest the policy becomes irrelevant and ignored.
Since we have argued this out, endlessly ... we can agree, we have been careful about this exception and move on?
Ray K
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/1] ci: enable DPDK GHA for arm64 with self-hosted runners
2021-10-13 8:03 7% ` [dpdk-dev] [PATCH v1 1/1] " Serena He
@ 2021-10-13 11:32 0% ` Michael Santana
0 siblings, 0 replies; 200+ results
From: Michael Santana @ 2021-10-13 11:32 UTC (permalink / raw)
To: Serena He, aconole, maicolgabriel, david.marchand, thomas
Cc: dev, nd, honnappa.nagarahalli, ruifeng.wang, Dean.Arnold, stable
On 10/13/21 4:03 AM, Serena He wrote:
> CI jobs are triggered only for repos installed with given GHApp and runners
>
> Cc: stable@dpdk.org
>
> Signed-off-by: Serena He <serena.he@arm.com>
>
> ---
> .github/workflows/build-arm64.yml | 118 ++++++++++++++++++++++++++++++
> 1 file changed, 118 insertions(+)
> create mode 100644 .github/workflows/build-arm64.yml
>
> diff --git a/.github/workflows/build-arm64.yml b/.github/workflows/build-arm64.yml
> new file mode 100644
> index 0000000000..570563f7c8
> --- /dev/null
> +++ b/.github/workflows/build-arm64.yml
Adding a new workflow should work on our 0-day-bot. We now support
having multiple workflows, so this looks good.
> @@ -0,0 +1,118 @@
> +name: build-arm64
> +
> +on:
> + push:
> + schedule:
> + - cron: '0 0 * * 1'
nit: Please add a comment for when this is scheduled so we don't have to
do cron math :)
> +
> +defaults:
> + run:
> + shell: bash --noprofile --norc -exo pipefail {0}
> +
> +jobs:
> + build:
> + # Here, runners for arm64 are accessed by installed GitHub APP, thus will not be available by fork.
> + # you can change the following 'if' and 'runs-on' if you have your own runners installed.
> + # or request to get your repo on the whitelist to use GitHub APP and delete this 'if'.
I think I understand. I think you mean s/GitHub APP/GitHub/; otherwise
I don't know what that is. From my understanding, you had to request
special arm-based runners from GitHub.
Are DPDK/dpdk and ovsrobot/dpdk whitelisted to use the arm-based runners?
Maybe there was a thread about this in the past that I missed, but where
and how do you get these arm-based runners from GitHub?
> + if: ${{ github.repository == 'DPDK/dpdk' || github.repository == 'ovsrobot/dpdk' }}
> + name: ${{ join(matrix.config.*, '-') }}
> + runs-on: ${{ matrix.config.os }}
> + env:
> + ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
> + BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
> + CL: ${{ matrix.config.compiler == 'clang' }}
> + CC: ccache ${{ matrix.config.compiler }}
> + DEF_LIB: ${{ matrix.config.library }}
> + LIBABIGAIL_VERSION: libabigail-1.8
> + REF_GIT_TAG: none
> +
> + strategy:
> + fail-fast: false
> + matrix:
> + config:
> + - os: [self-hosted,arm-ubuntu-20.04]
> + compiler: gcc
> + library: static
> + - os: [self-hosted,arm-ubuntu-20.04]
> + compiler: gcc
> + library: shared
> + checks: doc+tests
> + - os: [self-hosted,arm-ubuntu-20.04]
> + compiler: clang
> + library: static
> + - os: [self-hosted,arm-ubuntu-20.04]
> + compiler: clang
> + library: shared
> + checks: doc+tests
> +
> + steps:
> + - name: Checkout sources
> + uses: actions/checkout@v2
> + - name: Generate cache keys
> + id: get_ref_keys
> + run: |
> + echo -n '::set-output name=ccache::'
> + echo 'ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W)
> + echo -n '::set-output name=libabigail::'
> + echo 'libabigail-${{ matrix.config.os }}'
> + echo -n '::set-output name=abi::'
> + echo 'abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}'
> + - name: Retrieve ccache cache
> + uses: actions/cache@v2
> + with:
> + path: ~/.ccache
> + key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
> + restore-keys: |
> + ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
> + - name: Retrieve libabigail cache
> + id: libabigail-cache
> + uses: actions/cache@v2
> + if: env.ABI_CHECKS == 'true'
> + with:
> + path: libabigail
> + key: ${{ steps.get_ref_keys.outputs.libabigail }}
> + - name: Retrieve ABI reference cache
> + uses: actions/cache@v2
> + if: env.ABI_CHECKS == 'true'
> + with:
> + path: reference
> + key: ${{ steps.get_ref_keys.outputs.abi }}
> + - name: Update APT cache
> + run: sudo apt update || true
> + - name: Install packages
> + run: sudo apt install -y ccache libnuma-dev python3-setuptools
> + python3-wheel python3-pip python3-pyelftools ninja-build libbsd-dev
> + libpcap-dev libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
> + libarchive-dev zlib1g-dev pkgconf
> + - name: Install libabigail build dependencies if no cache is available
> + if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
> + run: sudo apt install -y autoconf automake libtool pkg-config libxml2-dev
> + libdw-dev
Lots of caching stuff. All of it needed?
> +
> + - name: Install test tools packages
> + run: sudo apt install -y gdb
> + - name: Install doc generation packages
> + if: env.BUILD_DOCS == 'true'
> + run: sudo apt install -y doxygen graphviz python3-sphinx
> + python3-sphinx-rtd-theme
> + - name: Run setup
> + run: |
> + .ci/linux-setup.sh
> + # Workaround on $HOME permissions as EAL checks them for plugin loading
> + chmod o-w $HOME
> + - name: Install clang
> + if: env.CL == 'true'
> + run: sudo apt install -y clang
> + - name: Build and test
> + run: .ci/linux-build.sh
> + - name: Upload logs on failure
> + if: failure()
> + uses: actions/upload-artifact@v2
> + with:
> + name: meson-logs-${{ join(matrix.config.*, '-') }}
> + path: |
> + build/meson-logs/testlog.txt
> + build/.ninja_log
> + build/meson-logs/meson-log.txt
> + build/gdb.log
LGTM!
> +
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mempool: fix name size in mempool structure
2021-10-13 11:07 4% ` David Marchand
@ 2021-10-13 11:14 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-13 11:14 UTC (permalink / raw)
To: David Marchand
Cc: Olivier Matz, Zoltan Kiss, dev, Ray Kinsella, Thomas Monjalon,
Konstantin Ananyev, Honnappa Nagarahalli
On 10/13/21 2:07 PM, David Marchand wrote:
> On Wed, Oct 13, 2021 at 10:57 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Use correct define as a name array size.
>>
>> The change breaks ABI and therefore cannot be backported to
>> stable branches.
>>
>> Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> Reviewed-by: David Marchand <david.marchand@redhat.com>
>
> Good catch, I guess we can clean ring too, quick grep:
>
> lib/ring/rte_ring_core.h-struct rte_ring {
> lib/ring/rte_ring_core.h- /*
> lib/ring/rte_ring_core.h: * Note: this field kept the
> RTE_MEMZONE_NAMESIZE size due to ABI
> lib/ring/rte_ring_core.h- * compatibility requirements, it
> could be changed to RTE_RING_NAMESIZE
> lib/ring/rte_ring_core.h: * next time the ABI changes
> lib/ring/rte_ring_core.h- */
> lib/ring/rte_ring_core.h- char name[RTE_MEMZONE_NAMESIZE]
> __rte_cache_aligned;
>
>
Yes. I've not bothered to grep... Cc maintainers.
@David, @Konstantin, or @Honnappa, will you send a patch or
should I?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 10:02 4% ` Kinsella, Ray
@ 2021-10-13 11:11 0% ` Bruce Richardson
2021-10-13 11:42 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-10-13 11:11 UTC (permalink / raw)
To: Kinsella, Ray
Cc: Thomas Monjalon, Dumitrescu, Cristian, dev, Zhang, Roy Fan,
Singh, Jasvinder, david.marchand
On Wed, Oct 13, 2021 at 11:02:02AM +0100, Kinsella, Ray wrote:
>
>
> On 13/10/2021 10:49, Thomas Monjalon wrote:
> > 13/10/2021 11:43, Kinsella, Ray:
> >> On 13/10/2021 10:40, Thomas Monjalon wrote:
> >>> 13/10/2021 10:51, Kinsella, Ray:
> >>>> On 12/10/2021 22:52, Thomas Monjalon wrote:
> >>>>> 12/10/2021 22:34, Dumitrescu, Cristian:
> >>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>>> 01/09/2021 14:20, Jasvinder Singh:
> >>>>>>>> These APIs were introduced in 18.05, therefore removing
> >>>>>>>> experimental tag to promote them to stable state.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> >>>>>>>> ---
> >>>>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
> >>>>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
> >>>>>>>> lib/pipeline/version.map | 16 ++++++----------
> >>>>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
> >>>>>>>
> >>>>>>> Cristian, please can you check whether you intend to keep these functions in
> >>>>>>> future?
> >>>>>>> If they are candidate to be removed, there is no point to promote them.
> >>>>>>
> >>>>>> Hi Thomas,
> >>>>>>
> >>>>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
> >>>>>>
> >>>>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
> >>>>>>
> >>>>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
> >>>>>>
> >>>>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
> >>>>>
> >>>>> I think we should not promote API that we know will disappear soon.
> >>>>> The stable status means something for the users.
> >>>>> Ray, what is your opinion?
> >>>>>
> >>>>
> >>>> Well - I agree with Cristian (he and I discuss this a few weeks ago).
> >>>> My position is if you are going to maintain an API, that means giving a few guarantees.
> >>>> The API's have been experimental for 3 years ... at what point do they mature?
> >>>>
> >>>> However, I agree there is two ways to look at this thing, I try to be pragmatic.
> >>>> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> >>>> If they strongly feel, it is a pointless exercise - I won't argue.
> >>>
> >>> I think you didn't get it.
> >>> This API will be removed soon.
> >>> That's why I think it doesn't make sense to make them stable, just before removing.
> >>>
> >>
> >> Nope, I got it 110%
> >> I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
> >>
> >> As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> >> If they strongly feel, it is a pointless exercise - I won't argue."
> >
> > Sorry, I don't understand your position.
> > Do you think we should promote functions to stable which are candidate to be removed soon?
> >
>
> I am just reflecting the policy here.
>
> "An API’s experimental status should be reviewed annually, by both the maintainer and/or the original contributor. Ordinarily APIs marked as experimental will be promoted to the stable ABI once a maintainer has become satisfied that the API is mature and is unlikely to change."
>
If an API is planned for removal, then I think it falls under the bucket of
"likely to change", so should not be made non-experimental. Therefore I'd
agree with Thomas view on this - not so much that promoting them is
pointless, but I'd actually view it as harmful in encouraging use that will
be broken in future.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mempool: fix name size in mempool structure
2021-10-13 8:57 11% [dpdk-dev] [PATCH] mempool: fix name size in mempool structure Andrew Rybchenko
@ 2021-10-13 11:07 4% ` David Marchand
2021-10-13 11:14 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-13 11:07 UTC (permalink / raw)
To: Andrew Rybchenko
Cc: Olivier Matz, Zoltan Kiss, dev, Ray Kinsella, Thomas Monjalon
On Wed, Oct 13, 2021 at 10:57 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Use correct define as a name array size.
>
> The change breaks ABI and therefore cannot be backported to
> stable branches.
>
> Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Good catch, I guess we can clean ring too, quick grep:
lib/ring/rte_ring_core.h-struct rte_ring {
lib/ring/rte_ring_core.h- /*
lib/ring/rte_ring_core.h: * Note: this field kept the
RTE_MEMZONE_NAMESIZE size due to ABI
lib/ring/rte_ring_core.h- * compatibility requirements, it
could be changed to RTE_RING_NAMESIZE
lib/ring/rte_ring_core.h: * next time the ABI changes
lib/ring/rte_ring_core.h- */
lib/ring/rte_ring_core.h- char name[RTE_MEMZONE_NAMESIZE]
__rte_cache_aligned;
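For what it's worth, a possible follow-up along the lines suggested above might look like the diff-style sketch below. This is purely illustrative, assumes a future ABI-breaking window, and is not a patch from this thread.
/* sketch only: mirror the mempool fix in rte_ring_core.h */
-	char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned;
+	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;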
--
David Marchand
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4 2/4] mempool: add non-IO flag
@ 2021-10-13 11:01 4% ` Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-13 11:01 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Matan Azrad, Olivier Matz
Mempool is a generic allocator that is not necessarily used for device
IO operations and its memory for DMA. Add MEMPOOL_F_NON_IO flag to mark
such mempools automatically if their objects are not contiguous
or IOVA are not available. Components can inspect this flag
in order to optimize their memory management.
Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
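To show the intended consumption of the hint, here is a hedged sketch; the dma_map_pool() helper is hypothetical and stands in for whatever device-specific mapping a component performs.
#include <rte_mempool.h>
/* Hypothetical device-specific mapping helper, assumed to exist elsewhere. */
int dma_map_pool(struct rte_mempool *mp);
/* Sketch: a component can skip DMA-mapping work for pools carrying the
 * new hint, since their objects will never be used for device IO.
 */
static int
maybe_dma_map_pool(struct rte_mempool *mp)
{
	if (mp->flags & MEMPOOL_F_NON_IO)
		return 0;
	return dma_map_pool(mp);
}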
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
app/test/test_mempool.c | 76 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 3 +
lib/mempool/rte_mempool.c | 2 +
lib/mempool/rte_mempool.h | 5 ++
4 files changed, 86 insertions(+)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index bc0cc9ed48..15146dd737 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -672,6 +672,74 @@ test_mempool_events_safety(void)
return 0;
}
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+ struct rte_mempool *mp;
+ int ret;
+
+ mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+ RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+ rte_strerror(rte_errno));
+ rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+ ret = rte_mempool_populate_default(mp);
+ RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+ rte_strerror(rte_errno));
+ RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set when NO_IOVA_CONTIG is set");
+ rte_mempool_free(mp);
+ return 0;
+}
+
+static int
+test_mempool_flag_non_io_set_when_populated_with_bad_iova(void)
+{
+ void *addr;
+ size_t size = 1 << 16;
+ struct rte_mempool *mp;
+ int ret;
+
+ addr = rte_malloc("test_mempool", size, 0);
+ RTE_TEST_ASSERT_NOT_NULL(addr, "Cannot allocate memory");
+ mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, 0);
+ RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+ rte_strerror(rte_errno));
+ ret = rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA, size,
+ NULL, NULL);
+ /* The flag must be inferred even if population isn't full. */
+ RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+ rte_strerror(rte_errno));
+ RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set when mempool is populated with RTE_BAD_IOVA");
+ rte_mempool_free(mp);
+ rte_free(addr);
+ return 0;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+ struct rte_mempool *mp;
+ int ret;
+
+ mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, 0);
+ RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+ rte_strerror(rte_errno));
+ ret = rte_mempool_populate_default(mp);
+ RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+ rte_strerror(rte_errno));
+ RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+ "NON_IO flag is set by default");
+ rte_mempool_free(mp);
+ return 0;
+}
+
static int
test_mempool(void)
{
@@ -854,6 +922,14 @@ test_mempool(void)
if (test_mempool_events_safety() < 0)
GOTO_ERR(ret, err);
+ /* test NON_IO flag inference */
+ if (test_mempool_flag_non_io_unset_by_default() < 0)
+ GOTO_ERR(ret, err);
+ if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+ GOTO_ERR(ret, err);
+ if (test_mempool_flag_non_io_set_when_populated_with_bad_iova() < 0)
+ GOTO_ERR(ret, err);
+
rte_mempool_list_dump(stdout);
ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f643a61f44..74e0e6f495 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+ that objects from this pool will not be used for device IO (e.g. DMA).
+
ABI Changes
-----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 51c0ba2931..2204f140b3 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -371,6 +371,8 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
+ if (iova == RTE_BAD_IOVA)
+ mp->flags |= MEMPOOL_F_NON_IO;
/* Report the mempool as ready only when fully populated. */
if (mp->populated_size >= mp->size)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 663123042f..029b62a650 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -262,6 +262,8 @@ struct rte_mempool {
#define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
#define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NON_IO 0x0040
+ /**< Internal: pool is not usable for device IO (DMA). */
/**
* @internal When debug is enabled, store some statistics.
@@ -991,6 +993,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
* "single-consumer". Otherwise, it is "multi-consumers".
* - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
* necessarily be contiguous in IO memory.
+ * - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
+ * never used for device IO, i.e. for DMA operations.
+ * It's a hint to other components and does not affect the mempool behavior.
* @return
* The pointer to the new allocated mempool, on success. NULL on error
* with rte_errno set appropriately. Possible rte_errno values include:
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 9:49 0% ` Thomas Monjalon
@ 2021-10-13 10:02 4% ` Kinsella, Ray
2021-10-13 11:11 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-13 10:02 UTC (permalink / raw)
To: Thomas Monjalon, Dumitrescu, Cristian
Cc: dev, Zhang, Roy Fan, Singh, Jasvinder, david.marchand
On 13/10/2021 10:49, Thomas Monjalon wrote:
> 13/10/2021 11:43, Kinsella, Ray:
>> On 13/10/2021 10:40, Thomas Monjalon wrote:
>>> 13/10/2021 10:51, Kinsella, Ray:
>>>> On 12/10/2021 22:52, Thomas Monjalon wrote:
>>>>> 12/10/2021 22:34, Dumitrescu, Cristian:
>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>> 01/09/2021 14:20, Jasvinder Singh:
>>>>>>>> These APIs were introduced in 18.05, therefore removing
>>>>>>>> experimental tag to promote them to stable state.
>>>>>>>>
>>>>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
>>>>>>>> ---
>>>>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
>>>>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
>>>>>>>> lib/pipeline/version.map | 16 ++++++----------
>>>>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
>>>>>>>
>>>>>>> Cristian, please can you check whether you intend to keep these functions in
>>>>>>> future?
>>>>>>> If they are candidate to be removed, there is no point to promote them.
>>>>>>
>>>>>> Hi Thomas,
>>>>>>
>>>>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
>>>>>>
>>>>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
>>>>>>
>>>>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
>>>>>>
>>>>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
>>>>>
>>>>> I think we should not promote API that we know will disappear soon.
>>>>> The stable status means something for the users.
>>>>> Ray, what is your opinion?
>>>>>
>>>>
>>>> Well - I agree with Cristian (he and I discuss this a few weeks ago).
>>>> My position is if you are going to maintain an API, that means giving a few guarantees.
>>>> The API's have been experimental for 3 years ... at what point do they mature?
>>>>
>>>> However, I agree there is two ways to look at this thing, I try to be pragmatic.
>>>> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
>>>> If they strongly feel, it is a pointless exercise - I won't argue.
>>>
>>> I think you didn't get it.
>>> This API will be removed soon.
>>> That's why I think it doesn't make sense to make them stable, just before removing.
>>>
>>
>> Nope, I got it 110%
>> I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
>>
>> As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
>> If they strongly feel, it is a pointless exercise - I won't argue."
>
> Sorry, I don't understand your position.
> Do you think we should promote functions to stable which are candidate to be removed soon?
>
I am just reflecting the policy here.
"An API’s experimental status should be reviewed annually, by both the maintainer and/or the original contributor. Ordinarily APIs marked as experimental will be promoted to the stable ABI once a maintainer has become satisfied that the API is mature and is unlikely to change."
then,
"The promotion or removal of symbols will typically form part of a conversation between the maintainer and the original contributor.".
As I said, in my email above ...
"Maturing of any ABI/API is a conversation between a maintainer and the contributor.
If they strongly feel, it is a pointless exercise [to make the symbols stable] - I won't argue.
Or to be _abundantly clear_ ...
I don't think we should promote functions needlessly, as I said, if others decide it is pointless, I won't argue.
I do think if we have a policy, that experimental symbols will mature or be removed, we should be careful about the exceptions we make, lest the policy becomes irrelevant and ignored.
Ray K
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 9:43 4% ` Kinsella, Ray
@ 2021-10-13 9:49 0% ` Thomas Monjalon
2021-10-13 10:02 4% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-13 9:49 UTC (permalink / raw)
To: Dumitrescu, Cristian, Kinsella, Ray
Cc: dev, Zhang, Roy Fan, Singh, Jasvinder, david.marchand
13/10/2021 11:43, Kinsella, Ray:
> On 13/10/2021 10:40, Thomas Monjalon wrote:
> > 13/10/2021 10:51, Kinsella, Ray:
> >> On 12/10/2021 22:52, Thomas Monjalon wrote:
> >>> 12/10/2021 22:34, Dumitrescu, Cristian:
> >>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>> 01/09/2021 14:20, Jasvinder Singh:
> >>>>>> These APIs were introduced in 18.05, therefore removing
> >>>>>> experimental tag to promote them to stable state.
> >>>>>>
> >>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> >>>>>> ---
> >>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
> >>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
> >>>>>> lib/pipeline/version.map | 16 ++++++----------
> >>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
> >>>>>
> >>>>> Cristian, please can you check whether you intend to keep these functions in
> >>>>> future?
> >>>>> If they are candidate to be removed, there is no point to promote them.
> >>>>
> >>>> Hi Thomas,
> >>>>
> >>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
> >>>>
> >>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
> >>>>
> >>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
> >>>>
> >>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
> >>>
> >>> I think we should not promote API that we know will disappear soon.
> >>> The stable status means something for the users.
> >>> Ray, what is your opinion?
> >>>
> >>
> >> Well - I agree with Cristian (he and I discuss this a few weeks ago).
> >> My position is if you are going to maintain an API, that means giving a few guarantees.
> >> The API's have been experimental for 3 years ... at what point do they mature?
> >>
> >> However, I agree there is two ways to look at this thing, I try to be pragmatic.
> >> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> >> If they strongly feel, it is a pointless exercise - I won't argue.
> >
> > I think you did't get it.
> > This API will be removed soon.
> > That's why I think it doesn't make sense to make them stable, just before removing.
> >
>
> Nope, I got it 110%
> I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
>
> As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> If they strongly feel, it is a pointless exercise - I won't argue."
Sorry, I don't understand your position.
Do you think we should promote functions to stable which are candidate to be removed soon?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 9:40 0% ` Thomas Monjalon
@ 2021-10-13 9:43 4% ` Kinsella, Ray
2021-10-13 9:49 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-13 9:43 UTC (permalink / raw)
To: Thomas Monjalon, Dumitrescu, Cristian
Cc: dev, Zhang, Roy Fan, Singh, Jasvinder, david.marchand
On 13/10/2021 10:40, Thomas Monjalon wrote:
> 13/10/2021 10:51, Kinsella, Ray:
>>
>> On 12/10/2021 22:52, Thomas Monjalon wrote:
>>> 12/10/2021 22:34, Dumitrescu, Cristian:
>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>> 01/09/2021 14:20, Jasvinder Singh:
>>>>>> These APIs were introduced in 18.05, therefore removing
>>>>>> experimental tag to promote them to stable state.
>>>>>>
>>>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
>>>>>> ---
>>>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
>>>>>> lib/pipeline/rte_table_action.h | 18 ------------------
>>>>>> lib/pipeline/version.map | 16 ++++++----------
>>>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
>>>>>
>>>>> Cristian, please can you check whether you intend to keep these functions in
>>>>> future?
>>>>> If they are candidate to be removed, there is no point to promote them.
>>>>
>>>> Hi Thomas,
>>>>
>>>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
>>>>
>>>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
>>>>
>>>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
>>>>
>>>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
>>>
>>> I think we should not promote API that we know will disappear soon.
>>> The stable status means something for the users.
>>> Ray, what is your opinion?
>>>
>>
>> Well - I agree with Cristian (he and I discuss this a few weeks ago).
>> My position is if you are going to maintain an API, that means giving a few guarantees.
>> The API's have been experimental for 3 years ... at what point do they mature?
>>
>> However, I agree there is two ways to look at this thing, I try to be pragmatic.
>> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
>> If they strongly feel, it is a pointless exercise - I won't argue.
>
> I think you didn't get it.
> This API will be removed soon.
> That's why I think it doesn't make sense to make them stable, just before removing.
>
Nope, I got it 110%
I reflected both my opinion as ABI Maintainer, and tried to be pragmatic about the situation.
As I said "Maturing of any ABI/API is a conversation between a maintainer and the contributor.
If they strongly feel, it is a pointless exercise - I won't argue."
Ray K
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
2021-10-13 8:51 3% ` Kinsella, Ray
@ 2021-10-13 9:40 0% ` Thomas Monjalon
2021-10-13 9:43 4% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-13 9:40 UTC (permalink / raw)
To: Dumitrescu, Cristian, Kinsella, Ray
Cc: dev, Zhang, Roy Fan, Singh, Jasvinder, david.marchand
13/10/2021 10:51, Kinsella, Ray:
>
> On 12/10/2021 22:52, Thomas Monjalon wrote:
> > 12/10/2021 22:34, Dumitrescu, Cristian:
> >> From: Thomas Monjalon <thomas@monjalon.net>
> >>> 01/09/2021 14:20, Jasvinder Singh:
> >>>> These APIs were introduced in 18.05, therefore removing
> >>>> experimental tag to promote them to stable state.
> >>>>
> >>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> >>>> ---
> >>>> lib/pipeline/rte_port_in_action.h | 10 ----------
> >>>> lib/pipeline/rte_table_action.h | 18 ------------------
> >>>> lib/pipeline/version.map | 16 ++++++----------
> >>>> 3 files changed, 6 insertions(+), 38 deletions(-)
> >>>
> >>> Cristian, please can you check whether you intend to keep these functions in
> >>> future?
> >>> If they are candidate to be removed, there is no point to promote them.
> >>
> >> Hi Thomas,
> >>
> >> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
> >>
> >> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
> >>
> >> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
> >>
> >> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
> >
> > I think we should not promote API that we know will disappear soon.
> > The stable status means something for the users.
> > Ray, what is your opinion?
> >
>
> Well - I agree with Cristian (he and I discussed this a few weeks ago).
> My position is that if you are going to maintain an API, that means giving a few guarantees.
> The APIs have been experimental for 3 years ... at what point do they mature?
>
> However, I agree there are two ways to look at this thing; I try to be pragmatic.
> Maturing of any ABI/API is a conversation between a maintainer and the contributor.
> If they strongly feel, it is a pointless exercise - I won't argue.
I think you didn't get it.
This API will be removed soon.
That's why I think it doesn't make sense to make them stable, just before removing.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] mempool: fix name size in mempool structure
@ 2021-10-13 8:57 11% Andrew Rybchenko
2021-10-13 11:07 4% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-13 8:57 UTC (permalink / raw)
To: Olivier Matz, Zoltan Kiss; +Cc: dev
Use correct define as a name array size.
The change breaks ABI and therefore cannot be backported to
stable branches.
Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
lib/mempool/rte_mempool.h | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index f57ecbd6fc..04b14d7ae9 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -209,12 +209,7 @@ struct rte_mempool_info {
* The RTE mempool structure.
*/
struct rte_mempool {
- /*
- * Note: this field kept the RTE_MEMZONE_NAMESIZE size due to ABI
- * compatibility requirements, it could be changed to
- * RTE_MEMPOOL_NAMESIZE next time the ABI changes
- */
- char name[RTE_MEMZONE_NAMESIZE]; /**< Name of mempool. */
+ char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
RTE_STD_C11
union {
void *pool_data; /**< Ring or pool to store objects. */
--
2.30.2
^ permalink raw reply [relevance 11%]
* Re: [dpdk-dev] [PATCH] pipeline: remove experimental tag from API
@ 2021-10-13 8:51 3% ` Kinsella, Ray
2021-10-13 9:40 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-13 8:51 UTC (permalink / raw)
To: Thomas Monjalon, Dumitrescu, Cristian
Cc: dev, Zhang, Roy Fan, Singh, Jasvinder, david.marchand
On 12/10/2021 22:52, Thomas Monjalon wrote:
> 12/10/2021 22:34, Dumitrescu, Cristian:
>> From: Thomas Monjalon <thomas@monjalon.net>
>>> 01/09/2021 14:20, Jasvinder Singh:
>>>> These APIs were introduced in 18.05, therefore removing
>>>> experimental tag to promote them to stable state.
>>>>
>>>> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
>>>> ---
>>>> lib/pipeline/rte_port_in_action.h | 10 ----------
>>>> lib/pipeline/rte_table_action.h | 18 ------------------
>>>> lib/pipeline/version.map | 16 ++++++----------
>>>> 3 files changed, 6 insertions(+), 38 deletions(-)
>>>
>>> Cristian, please can you check whether you intend to keep these functions in
>>> future?
>>> If they are candidate to be removed, there is no point to promote them.
>>
>> Hi Thomas,
>>
>> Yes, they are candidate for removal, as the new rte_swx_pipeline API evolves.
>>
>> But removing them requires updating the drivers/net/softnic code to use the new API, which is not going to be completed in time for release 21.11.
>>
>> So given this lag, it might be better to simply promote these functions to stable API now, as Ray suggests, instead of continuing to keep them experimental; then, once these functions are no longer used, then we can remove them, most likely in 22.11.
>>
>> So I will ack these patches, but I am willing to reconsider if you feel strongly against this approach.
>
> I think we should not promote API that we know will disappear soon.
> The stable status means something for the users.
> Ray, what is your opinion?
>
Well - I agree with Cristian (he and I discussed this a few weeks ago).
My position is that if you are going to maintain an API, that means giving a few guarantees.
The APIs have been experimental for 3 years ... at what point do they mature?
However, I agree there are two ways to look at this thing; I try to be pragmatic.
Maturing of any ABI/API is a conversation between a maintainer and the contributor.
If they strongly feel, it is a pointless exercise - I won't argue.
Ray K
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v12 00/12] Packet capture framework update
2021-10-12 18:22 0% ` Thomas Monjalon
@ 2021-10-13 8:44 0% ` Pattan, Reshma
0 siblings, 0 replies; 200+ results
From: Pattan, Reshma @ 2021-10-13 8:44 UTC (permalink / raw)
To: Thomas Monjalon, Stephen Hemminger; +Cc: dev, Richardson, Bruce, david.marchand
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> > Thought they were, looks like Reshma got missed.
> >
> > No worries about doing in 22.02 since there is no API/ABI breakage in
> > the patchset. No pre-release note needed either.
>
> We can still merge it for 21.11 if Reshma is OK with the v12.
>
FYI, I have reviewed and acked the below pdump library patch.
[v12,06/12] pdump: support pcapng and filtering
There are some other patches, mainly the new library librte_pcapng, which need a review and an Ack. Can someone join in to review and provide an Ack?
I will also take a look.
Thanks,
Reshma
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-13 7:04 0% ` Anoob Joseph
@ 2021-10-13 8:39 3% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-13 8:39 UTC (permalink / raw)
To: Anoob Joseph, Thomas Monjalon, Akhil Goyal, dev
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
On 13/10/2021 08:04, Anoob Joseph wrote:
> Hi Akhil, Ray, Thomas,
>
> Please see inline.
>
> Thanks,
> Anoob
>
>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Wednesday, October 13, 2021 12:32 PM
>> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Kinsella, Ray
>> <mdr@ashroe.eu>; Anoob Joseph <anoobj@marvell.com>
>> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
>> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
>> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
>> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
>> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
>> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj
>> Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
>> <adwivedi@marvell.com>; ciara.power@intel.com; Stephen Hemminger
>> <stephen@networkplumber.org>; Yigit, Ferruh <ferruh.yigit@intel.com>;
>> bruce.richardson@intel.com
>> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove
>> LIST_END enumerators
>>
>> 13/10/2021 07:36, Anoob Joseph:
>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>> 12/10/2021 16:47, Kinsella, Ray:
>>>>> On 12/10/2021 15:18, Anoob Joseph wrote:
>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>> 12/10/2021 15:38, Anoob Joseph:
>>>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>>>> 12/10/2021 13:34, Anoob Joseph:
>>>>>>>>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>>>>>>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
>>>>>>>>>>>> From: Akhil Goyal <gakhil@marvell.com>
>>>>>>>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
>>>>>>>>>>>>>>> Remove *_LIST_END enumerators from asymmetric
>> crypto
>>>> lib to
>>>>>>>>>>>>>>> avoid ABI breakage for every new addition in enums.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>> - } else if (xform->xform_type >=
>>>>>>>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>>>>>>>>>>>>>>> + } else if (xform->xform_type >
>>>>>>>>> RTE_CRYPTO_ASYM_XFORM_ECPM
>>>>>>>>> [...]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So I am not sure that this is an improvement.
>>>>>>>>>
>>>>>>>>> Indeed, it is not an improvement.
>>>>>>>>>
>>>>>>>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
>>>>>>>>>>>>>> used to size arrays.
>>>>>>>>>>>>>> And that broke when new algorithms got added. Is that an
>>>>>>>>>>>>>> issue, in this
>>>>>>>>>>> case?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Yes we did this same exercise for symmetric crypto enums
>>>> earlier.
>>>>>>>>>>>>> Asym enums were left as it was experimental at that point.
>>>>>>>>>>>>> They are still experimental, but thought of making this
>>>>>>>>>>>>> uniform throughout DPDK enums.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not sure that swapping out _LIST_END, and then
>>>>>>>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM
>> and
>>>>>>>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
>>>>>>>>> improvement
>>>>>>>>>>>>> here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My 2c is that from an ABI PoV
>>>> RTE_CRYPTO_ASYM_OP_LIST_END is
>>>>>>>>>>>>>> not better or worse, than
>>>>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Interested to hear other thoughts.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I don’t have any better solution for avoiding ABI issues for
>> now.
>>>>>>>>>>>>> The change is for avoiding ABI breakage. But we can drop
>>>>>>>>>>>>> this patch For now as asym is still experimental.
>>>>>>>>>>>>
>>>>>>>>>>>> [Anoob] Having LIST_END would preclude new additions to
>>>>>>>>>>>> asymmetric
>>>>>>>>> algos?
>>>>>>>>>>> If yes, then I would suggest we address it now.
>>>>>>>>>>>
>>>>>>>>>>> Not at all - but it can be problematic, if two versions of
>>>>>>>>>>> DPDK disagree with the value of LIST_END.
>>>>>>>>>>>
>>>>>>>>>>>> Looking at the "problematic changes", we only have 2-3
>>>>>>>>>>>> application & PMD changes. For unit test application, we
>>>>>>>>>>>> could may be do something like,
>>>>>>>>>>>
>>>>>>>>>>> The essential functionality is not that different, I am just
>>>>>>>>>>> not sure that the verbosity below is helping.
>>>>>>>>>>> What you are really trying to guard against is people using
>>>>>>>>>>> LIST_END to size arrays.
>>>>>>>>>>
>>>>>>>>>> [Anoob] Our problem is application using LIST_END (which
>>>>>>>>>> comes from library)
>>>>>>>>> to determine the number of iterations for the loop. My
>>>>>>>>> suggestion is to modify the UT such that, we could use
>>>>>>>>> RTE_DIM(types) (which comes from application) to determine
>>>>>>>>> iterations of loop. This would solve the
>>>>>>> problem, right?
>>>>>>>>>
>>>>>>>>> The problem is not the application.
>>>>>>>>> Are you asking the app to define DPDK types?
>>>>>>>>
>>>>>>>> [Anoob] I didn't understand how you concluded that.
>>>>>>>
>>>>>>> Because you define a specific array in the test app.
>>>>>>>
>>>>>>>> The app is supposed to test "n" asymmetric features supported
>>>>>>>> by
>>>> DPDK.
>>>>>>> Currently, it does that by looping from 0 to LIST_END which
>>>>>>> happens to give you the first n features. Now, if we add any
>>>>>>> new asymmetric feature, LIST_END value would change. Isn't that
>>>>>>> the very reason why we removed LIST_END from symmetric library
>> and applications?
>>>>>>>
>>>>>>> Yes
>>>>>>>
>>>>>>>> Now coming to what I proposed, the app is supposed to test "n"
>>>>>>>> asymmetric
>>>>>>> features. LIST_END helps in doing the loops. If we remove
>>>>>>> LIST_END, then application will not be in a position to do a
>>>>>>> loop. My suggestion is, we list the types that are supposed to
>>>>>>> be tested by the app, and let that array be used as feature list.
>>>>>>>>
>>>>>>>> PS: Just to reiterate, my proposal is just a local array which
>>>>>>>> would hold DPDK
>>>>>>> defined RTE enum values for the features that would be tested
>>>>>>> by this app/function.
>>>>>>>
>>>>>>> I am more concerned by the general case than the test app.
>>>>>>> I think a function returning a number is more app-friendly.
>>>>>>
>>>>>> [Anoob] Indeed. But there are 3 LIST_ENDs removed with this
>>>>>> patch. Do
>>>> you propose 3 new APIs to just get max number?
>>>>>
>>>>> 1 API returning a single "info" structure perhaps - as being the
>>>>> most
>>>> extensible?
>>>>
>>>> Or 3 iterators (foreach construct).
>>>> Instead of just returning a size, we can have an iterator for each
>>>> enum which needs to be iterated.
>>>
>>> [Anoob] Something like this?
>>>
>>> diff --git a/app/test/test_cryptodev_asym.c
>>> b/app/test/test_cryptodev_asym.c index 847b074a4f..68a6197851 100644
>>> --- a/app/test/test_cryptodev_asym.c
>>> +++ b/app/test/test_cryptodev_asym.c
>>> @@ -542,7 +542,7 @@ test_one_case(const void *test_case, int
>> sessionless)
>>> printf(" %u) TestCase %s %s\n", test_index++,
>>> tc.modex.description, test_msg);
>>> } else {
>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
>>> + RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) {
>>> if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA)
>> {
>>> if (tc.rsa_data.op_type_flags & (1 << i)) {
>>> if (tc.rsa_data.key_exp) {
>>> diff --git a/lib/cryptodev/rte_crypto_asym.h
>>> b/lib/cryptodev/rte_crypto_asym.h index 9c866f553f..5627dcaff1 100644
>>> --- a/lib/cryptodev/rte_crypto_asym.h
>>> +++ b/lib/cryptodev/rte_crypto_asym.h
>>> @@ -119,6 +119,11 @@ enum rte_crypto_asym_op_type {
>>> RTE_CRYPTO_ASYM_OP_LIST_END
>>> };
>>>
>>> +#define RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) \
>>> + for (i = RTE_CRYPTO_ASYM_OP_ENCRYPT; \
>>> + i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; \
>>> + i++)
>>
>> You must not use enum values in the .h, otherwise ABI compatibility is not
>> ensured.
>> Yes you can do a macro, but it must call functions, not using direct values.
>>
>
> [Anoob] Understood. Will do that.
>
> @Ray, @Akhil, you are also in agreement, right?
>
Yes - whether you use the MACRO or not is less important.
In order to maintain the ABI ... you need to learn the array size through an API.
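For illustration, a minimal sketch in C of the kind of accessor Ray describes. The function name rte_crypto_asym_op_count() and the test function are hypothetical, not an existing DPDK API; only the enum values come from rte_crypto_asym.h.

#include <stdio.h>
#include <rte_crypto_asym.h>

/* Hypothetical accessor the cryptodev library could export: the library owns
 * the enum, so only it knows how many asymmetric op types exist in the DPDK
 * version the application is actually linked against. */
int rte_crypto_asym_op_count(void);

/* Library-side sketch: the end marker stays out of the application's reach,
 * so appending a new op type only changes the returned value, not the ABI. */
int
rte_crypto_asym_op_count(void)
{
	return RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE + 1;
}

/* Application-side sketch: size the loop through the API, not LIST_END. */
static void
test_all_asym_op_types(void)
{
	int i;
	int nb_ops = rte_crypto_asym_op_count();

	for (i = 0; i < nb_ops; i++)
		printf("testing asym op type %d\n", i);
}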
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 1/1] ci: enable DPDK GHA for arm64 with self-hosted runners
@ 2021-10-13 8:03 7% ` Serena He
2021-10-13 11:32 0% ` Michael Santana
0 siblings, 1 reply; 200+ results
From: Serena He @ 2021-10-13 8:03 UTC (permalink / raw)
To: aconole, maicolgabriel, david.marchand, thomas
Cc: dev, nd, honnappa.nagarahalli, ruifeng.wang, Dean.Arnold,
Serena He, stable
CI jobs are triggered only for repos that have the given GitHub App and runners installed.
Cc: stable@dpdk.org
Signed-off-by: Serena He <serena.he@arm.com>
---
.github/workflows/build-arm64.yml | 118 ++++++++++++++++++++++++++++++
1 file changed, 118 insertions(+)
create mode 100644 .github/workflows/build-arm64.yml
diff --git a/.github/workflows/build-arm64.yml b/.github/workflows/build-arm64.yml
new file mode 100644
index 0000000000..570563f7c8
--- /dev/null
+++ b/.github/workflows/build-arm64.yml
@@ -0,0 +1,118 @@
+name: build-arm64
+
+on:
+ push:
+ schedule:
+ - cron: '0 0 * * 1'
+
+defaults:
+ run:
+ shell: bash --noprofile --norc -exo pipefail {0}
+
+jobs:
+ build:
+ # Here, runners for arm64 are accessed through the installed GitHub App, thus they will not be available to forks.
+ # You can change the following 'if' and 'runs-on' if you have your own runners installed,
+ # or request to get your repo on the whitelist to use the GitHub App and delete this 'if'.
+ if: ${{ github.repository == 'DPDK/dpdk' || github.repository == 'ovsrobot/dpdk' }}
+ name: ${{ join(matrix.config.*, '-') }}
+ runs-on: ${{ matrix.config.os }}
+ env:
+ ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
+ BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
+ CL: ${{ matrix.config.compiler == 'clang' }}
+ CC: ccache ${{ matrix.config.compiler }}
+ DEF_LIB: ${{ matrix.config.library }}
+ LIBABIGAIL_VERSION: libabigail-1.8
+ REF_GIT_TAG: none
+
+ strategy:
+ fail-fast: false
+ matrix:
+ config:
+ - os: [self-hosted,arm-ubuntu-20.04]
+ compiler: gcc
+ library: static
+ - os: [self-hosted,arm-ubuntu-20.04]
+ compiler: gcc
+ library: shared
+ checks: doc+tests
+ - os: [self-hosted,arm-ubuntu-20.04]
+ compiler: clang
+ library: static
+ - os: [self-hosted,arm-ubuntu-20.04]
+ compiler: clang
+ library: shared
+ checks: doc+tests
+
+ steps:
+ - name: Checkout sources
+ uses: actions/checkout@v2
+ - name: Generate cache keys
+ id: get_ref_keys
+ run: |
+ echo -n '::set-output name=ccache::'
+ echo 'ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W)
+ echo -n '::set-output name=libabigail::'
+ echo 'libabigail-${{ matrix.config.os }}'
+ echo -n '::set-output name=abi::'
+ echo 'abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}'
+ - name: Retrieve ccache cache
+ uses: actions/cache@v2
+ with:
+ path: ~/.ccache
+ key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
+ restore-keys: |
+ ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
+ - name: Retrieve libabigail cache
+ id: libabigail-cache
+ uses: actions/cache@v2
+ if: env.ABI_CHECKS == 'true'
+ with:
+ path: libabigail
+ key: ${{ steps.get_ref_keys.outputs.libabigail }}
+ - name: Retrieve ABI reference cache
+ uses: actions/cache@v2
+ if: env.ABI_CHECKS == 'true'
+ with:
+ path: reference
+ key: ${{ steps.get_ref_keys.outputs.abi }}
+ - name: Update APT cache
+ run: sudo apt update || true
+ - name: Install packages
+ run: sudo apt install -y ccache libnuma-dev python3-setuptools
+ python3-wheel python3-pip python3-pyelftools ninja-build libbsd-dev
+ libpcap-dev libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
+ libarchive-dev zlib1g-dev pkgconf
+ - name: Install libabigail build dependencies if no cache is available
+ if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
+ run: sudo apt install -y autoconf automake libtool pkg-config libxml2-dev
+ libdw-dev
+
+ - name: Install test tools packages
+ run: sudo apt install -y gdb
+ - name: Install doc generation packages
+ if: env.BUILD_DOCS == 'true'
+ run: sudo apt install -y doxygen graphviz python3-sphinx
+ python3-sphinx-rtd-theme
+ - name: Run setup
+ run: |
+ .ci/linux-setup.sh
+ # Workaround on $HOME permissions as EAL checks them for plugin loading
+ chmod o-w $HOME
+ - name: Install clang
+ if: env.CL == 'true'
+ run: sudo apt install -y clang
+ - name: Build and test
+ run: .ci/linux-build.sh
+ - name: Upload logs on failure
+ if: failure()
+ uses: actions/upload-artifact@v2
+ with:
+ name: meson-logs-${{ join(matrix.config.*, '-') }}
+ path: |
+ build/meson-logs/testlog.txt
+ build/.ninja_log
+ build/meson-logs/meson-log.txt
+ build/gdb.log
+
--
2.17.1
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-13 7:02 3% ` Thomas Monjalon
@ 2021-10-13 7:04 0% ` Anoob Joseph
2021-10-13 8:39 3% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-13 7:04 UTC (permalink / raw)
To: Thomas Monjalon, Akhil Goyal, dev, Kinsella, Ray
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
Hi Akhil, Ray, Thomas,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, October 13, 2021 12:32 PM
> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Kinsella, Ray
> <mdr@ashroe.eu>; Anoob Joseph <anoobj@marvell.com>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj
> Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; ciara.power@intel.com; Stephen Hemminger
> <stephen@networkplumber.org>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> bruce.richardson@intel.com
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove
> LIST_END enumerators
>
> 13/10/2021 07:36, Anoob Joseph:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 12/10/2021 16:47, Kinsella, Ray:
> > > > On 12/10/2021 15:18, Anoob Joseph wrote:
> > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > >> 12/10/2021 15:38, Anoob Joseph:
> > > > >>> From: Thomas Monjalon <thomas@monjalon.net>
> > > > >>>> 12/10/2021 13:34, Anoob Joseph:
> > > > >>>>> From: Kinsella, Ray <mdr@ashroe.eu>
> > > > >>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
> > > > >>>>>>> From: Akhil Goyal <gakhil@marvell.com>
> > > > >>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > > > >>>>>>>>>> Remove *_LIST_END enumerators from asymmetric
> crypto
> > > lib to
> > > > >>>>>>>>>> avoid ABI breakage for every new addition in enums.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > > >>>>>>>>>> ---
> > > > >>>>>>>>>> - } else if (xform->xform_type >=
> > > > >>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > > >>>>>>>>>> + } else if (xform->xform_type >
> > > > >>>> RTE_CRYPTO_ASYM_XFORM_ECPM
> > > > >>>> [...]
> > > > >>>>>>>>>
> > > > >>>>>>>>> So I am not sure that this is an improvement.
> > > > >>>>
> > > > >>>> Indeed, it is not an improvement.
> > > > >>>>
> > > > >>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
> > > > >>>>>>>>> used to size arrays.
> > > > >>>>>>>>> And that broke when new algorithms got added. Is that an
> > > > >>>>>>>>> issue, in this
> > > > >>>>>> case?
> > > > >>>>>>>>
> > > > >>>>>>>> Yes we did this same exercise for symmetric crypto enums
> > > earlier.
> > > > >>>>>>>> Asym enums were left as it was experimental at that point.
> > > > >>>>>>>> They are still experimental, but thought of making this
> > > > >>>>>>>> uniform throughout DPDK enums.
> > > > >>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>> I am not sure that swapping out _LIST_END, and then
> > > > >>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM
> and
> > > > >>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> > > > >>>> improvement
> > > > >>>>>>>> here.
> > > > >>>>>>>>>
> > > > >>>>>>>>> My 2c is that from an ABI PoV
> > > RTE_CRYPTO_ASYM_OP_LIST_END is
> > > > >>>>>>>>> not better or worse, than
> > > > >>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > > > >>>>>>>>>
> > > > >>>>>>>>> Interested to hear other thoughts.
> > > > >>>>>>>>
> > > > >>>>>>>> I don’t have any better solution for avoiding ABI issues for
> now.
> > > > >>>>>>>> The change is for avoiding ABI breakage. But we can drop
> > > > >>>>>>>> this patch For now as asym is still experimental.
> > > > >>>>>>>
> > > > >>>>>>> [Anoob] Having LIST_END would preclude new additions to
> > > > >>>>>>> asymmetric
> > > > >>>> algos?
> > > > >>>>>> If yes, then I would suggest we address it now.
> > > > >>>>>>
> > > > >>>>>> Not at all - but it can be problematic, if two versions of
> > > > >>>>>> DPDK disagree with the value of LIST_END.
> > > > >>>>>>
> > > > >>>>>>> Looking at the "problematic changes", we only have 2-3
> > > > >>>>>>> application & PMD changes. For unit test application, we
> > > > >>>>>>> could may be do something like,
> > > > >>>>>>
> > > > >>>>>> The essential functionality is not that different, I am just
> > > > >>>>>> not sure that the verbosity below is helping.
> > > > >>>>>> What you are really trying to guard against is people using
> > > > >>>>>> LIST_END to size arrays.
> > > > >>>>>
> > > > >>>>> [Anoob] Our problem is application using LIST_END (which
> > > > >>>>> comes from library)
> > > > >>>> to determine the number of iterations for the loop. My
> > > > >>>> suggestion is to modify the UT such that, we could use
> > > > >>>> RTE_DIM(types) (which comes from application) to determine
> > > > >>>> iterations of loop. This would solve the
> > > > >> problem, right?
> > > > >>>>
> > > > >>>> The problem is not the application.
> > > > >>>> Are you asking the app to define DPDK types?
> > > > >>>
> > > > >>> [Anoob] I didn't understand how you concluded that.
> > > > >>
> > > > >> Because you define a specific array in the test app.
> > > > >>
> > > > >>> The app is supposed to test "n" asymmetric features supported
> > > > >>> by
> > > DPDK.
> > > > >> Currently, it does that by looping from 0 to LIST_END which
> > > > >> happens to give you the first n features. Now, if we add any
> > > > >> new asymmetric feature, LIST_END value would change. Isn't that
> > > > >> the very reason why we removed LIST_END from symmetric library
> and applications?
> > > > >>
> > > > >> Yes
> > > > >>
> > > > >>> Now coming to what I proposed, the app is supposed to test "n"
> > > > >>> asymmetric
> > > > >> features. LIST_END helps in doing the loops. If we remove
> > > > >> LIST_END, then application will not be in a position to do a
> > > > >> loop. My suggestion is, we list the types that are supposed to
> > > > >> be tested by the app, and let that array be used as feature list.
> > > > >>>
> > > > >>> PS: Just to reiterate, my proposal is just a local array which
> > > > >>> would hold DPDK
> > > > >> defined RTE enum values for the features that would be tested
> > > > >> by this app/function.
> > > > >>
> > > > >> I am more concerned by the general case than the test app.
> > > > >> I think a function returning a number is more app-friendly.
> > > > >
> > > > > [Anoob] Indeed. But there are 3 LIST_ENDs removed with this
> > > > > patch. Do
> > > you propose 3 new APIs to just get max number?
> > > >
> > > > 1 API returning a single "info" structure perhaps - as being the
> > > > most
> > > extensible?
> > >
> > > Or 3 iterators (foreach construct).
> > > Instead of just returning a size, we can have an iterator for each
> > > enum which needs to be iterated.
> >
> > [Anoob] Something like this?
> >
> > diff --git a/app/test/test_cryptodev_asym.c
> > b/app/test/test_cryptodev_asym.c index 847b074a4f..68a6197851 100644
> > --- a/app/test/test_cryptodev_asym.c
> > +++ b/app/test/test_cryptodev_asym.c
> > @@ -542,7 +542,7 @@ test_one_case(const void *test_case, int
> sessionless)
> > printf(" %u) TestCase %s %s\n", test_index++,
> > tc.modex.description, test_msg);
> > } else {
> > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > + RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) {
> > if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA)
> {
> > if (tc.rsa_data.op_type_flags & (1 << i)) {
> > if (tc.rsa_data.key_exp) {
> > diff --git a/lib/cryptodev/rte_crypto_asym.h
> > b/lib/cryptodev/rte_crypto_asym.h index 9c866f553f..5627dcaff1 100644
> > --- a/lib/cryptodev/rte_crypto_asym.h
> > +++ b/lib/cryptodev/rte_crypto_asym.h
> > @@ -119,6 +119,11 @@ enum rte_crypto_asym_op_type {
> > RTE_CRYPTO_ASYM_OP_LIST_END
> > };
> >
> > +#define RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) \
> > + for (i = RTE_CRYPTO_ASYM_OP_ENCRYPT; \
> > + i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; \
> > + i++)
>
> You must not use enum values in the .h, otherwise ABI compatibility is not
> ensured.
> Yes you can do a macro, but it must call functions, not using direct values.
>
[Anoob] Understood. Will do that.
@Ray, @Akhil, you are also in agreement, right?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-13 5:36 0% ` Anoob Joseph
@ 2021-10-13 7:02 3% ` Thomas Monjalon
2021-10-13 7:04 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-13 7:02 UTC (permalink / raw)
To: Akhil Goyal, dev, Kinsella, Ray, Anoob Joseph
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
13/10/2021 07:36, Anoob Joseph:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 12/10/2021 16:47, Kinsella, Ray:
> > > On 12/10/2021 15:18, Anoob Joseph wrote:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > >> 12/10/2021 15:38, Anoob Joseph:
> > > >>> From: Thomas Monjalon <thomas@monjalon.net>
> > > >>>> 12/10/2021 13:34, Anoob Joseph:
> > > >>>>> From: Kinsella, Ray <mdr@ashroe.eu>
> > > >>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
> > > >>>>>>> From: Akhil Goyal <gakhil@marvell.com>
> > > >>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > > >>>>>>>>>> Remove *_LIST_END enumerators from asymmetric crypto
> > lib to
> > > >>>>>>>>>> avoid ABI breakage for every new addition in enums.
> > > >>>>>>>>>>
> > > >>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > >>>>>>>>>> ---
> > > >>>>>>>>>> - } else if (xform->xform_type >=
> > > >>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > >>>>>>>>>> + } else if (xform->xform_type >
> > > >>>> RTE_CRYPTO_ASYM_XFORM_ECPM
> > > >>>> [...]
> > > >>>>>>>>>
> > > >>>>>>>>> So I am not sure that this is an improvement.
> > > >>>>
> > > >>>> Indeed, it is not an improvement.
> > > >>>>
> > > >>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
> > > >>>>>>>>> used to size arrays.
> > > >>>>>>>>> And that broke when new algorithms got added. Is that an
> > > >>>>>>>>> issue, in this
> > > >>>>>> case?
> > > >>>>>>>>
> > > >>>>>>>> Yes we did this same exercise for symmetric crypto enums
> > earlier.
> > > >>>>>>>> Asym enums were left as it was experimental at that point.
> > > >>>>>>>> They are still experimental, but thought of making this
> > > >>>>>>>> uniform throughout DPDK enums.
> > > >>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> I am not sure that swapping out _LIST_END, and then
> > > >>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > > >>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> > > >>>> improvement
> > > >>>>>>>> here.
> > > >>>>>>>>>
> > > >>>>>>>>> My 2c is that from an ABI PoV
> > RTE_CRYPTO_ASYM_OP_LIST_END is
> > > >>>>>>>>> not better or worse, than
> > > >>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > > >>>>>>>>>
> > > >>>>>>>>> Interested to hear other thoughts.
> > > >>>>>>>>
> > > >>>>>>>> I don’t have any better solution for avoiding ABI issues for now.
> > > >>>>>>>> The change is for avoiding ABI breakage. But we can drop this
> > > >>>>>>>> patch For now as asym is still experimental.
> > > >>>>>>>
> > > >>>>>>> [Anoob] Having LIST_END would preclude new additions to
> > > >>>>>>> asymmetric
> > > >>>> algos?
> > > >>>>>> If yes, then I would suggest we address it now.
> > > >>>>>>
> > > >>>>>> Not at all - but it can be problematic, if two versions of DPDK
> > > >>>>>> disagree with the value of LIST_END.
> > > >>>>>>
> > > >>>>>>> Looking at the "problematic changes", we only have 2-3
> > > >>>>>>> application & PMD changes. For unit test application, we could
> > > >>>>>>> may be do something like,
> > > >>>>>>
> > > >>>>>> The essential functionality is not that different, I am just not
> > > >>>>>> sure that the verbosity below is helping.
> > > >>>>>> What you are really trying to guard against is people using
> > > >>>>>> LIST_END to size arrays.
> > > >>>>>
> > > >>>>> [Anoob] Our problem is application using LIST_END (which comes
> > > >>>>> from library)
> > > >>>> to determine the number of iterations for the loop. My suggestion
> > > >>>> is to modify the UT such that, we could use RTE_DIM(types) (which
> > > >>>> comes from application) to determine iterations of loop. This
> > > >>>> would solve the
> > > >> problem, right?
> > > >>>>
> > > >>>> The problem is not the application.
> > > >>>> Are you asking the app to define DPDK types?
> > > >>>
> > > >>> [Anoob] I didn't understand how you concluded that.
> > > >>
> > > >> Because you define a specific array in the test app.
> > > >>
> > > >>> The app is supposed to test "n" asymmetric features supported by
> > DPDK.
> > > >> Currently, it does that by looping from 0 to LIST_END which happens
> > > >> to give you the first n features. Now, if we add any new asymmetric
> > > >> feature, LIST_END value would change. Isn't that the very reason
> > > >> why we removed LIST_END from symmetric library and applications?
> > > >>
> > > >> Yes
> > > >>
> > > >>> Now coming to what I proposed, the app is supposed to test "n"
> > > >>> asymmetric
> > > >> features. LIST_END helps in doing the loops. If we remove LIST_END,
> > > >> then application will not be in a position to do a loop. My
> > > >> suggestion is, we list the types that are supposed to be tested by
> > > >> the app, and let that array be used as feature list.
> > > >>>
> > > >>> PS: Just to reiterate, my proposal is just a local array which
> > > >>> would hold DPDK
> > > >> defined RTE enum values for the features that would be tested by
> > > >> this app/function.
> > > >>
> > > >> I am more concerned by the general case than the test app.
> > > >> I think a function returning a number is more app-friendly.
> > > >
> > > > [Anoob] Indeed. But there are 3 LIST_ENDs removed with this patch. Do
> > you propose 3 new APIs to just get max number?
> > >
> > > 1 API returning a single "info" structure perhaps - as being the most
> > extensible?
> >
> > Or 3 iterators (foreach construct).
> > Instead of just returning a size, we can have an iterator for each enum which
> > needs to be iterated.
>
> [Anoob] Something like this?
>
> diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
> index 847b074a4f..68a6197851 100644
> --- a/app/test/test_cryptodev_asym.c
> +++ b/app/test/test_cryptodev_asym.c
> @@ -542,7 +542,7 @@ test_one_case(const void *test_case, int sessionless)
> printf(" %u) TestCase %s %s\n", test_index++,
> tc.modex.description, test_msg);
> } else {
> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> + RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) {
> if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
> if (tc.rsa_data.op_type_flags & (1 << i)) {
> if (tc.rsa_data.key_exp) {
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index 9c866f553f..5627dcaff1 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -119,6 +119,11 @@ enum rte_crypto_asym_op_type {
> RTE_CRYPTO_ASYM_OP_LIST_END
> };
>
> +#define RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) \
> + for (i = RTE_CRYPTO_ASYM_OP_ENCRYPT; \
> + i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; \
> + i++)
You must not use enum values in the .h, otherwise ABI compatibility is not ensured.
Yes you can do a macro, but it must call functions, not using direct values.
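To make that constraint concrete, here is a minimal sketch of a foreach macro whose bound comes from a function call rather than from enum values in the header; rte_crypto_asym_op_count() is a hypothetical helper, not an existing DPDK symbol.

/* Sketch only: because the loop bound is a function call resolved at run
 * time, the installed header never encodes a particular enum value, and a
 * binary built against an older header keeps iterating correctly when a
 * newer DPDK release adds op types. */
int rte_crypto_asym_op_count(void);	/* hypothetical accessor */

#define RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) \
	for ((i) = 0; (i) < rte_crypto_asym_op_count(); (i)++)

It would be used exactly like the macro in the diff quoted above, e.g. RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) { ... } in test_one_case().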
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 15:06 0% ` Thomas Monjalon
@ 2021-10-13 5:36 0% ` Anoob Joseph
2021-10-13 7:02 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-13 5:36 UTC (permalink / raw)
To: Thomas Monjalon, Akhil Goyal, dev, Kinsella, Ray
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
Hi Thomas, Ray,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 12, 2021 8:37 PM
> To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal
> <gakhil@marvell.com>; dev@dpdk.org; Kinsella, Ray <mdr@ashroe.eu>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj
> Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; ciara.power@intel.com; Stephen Hemminger
> <stephen@networkplumber.org>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> bruce.richardson@intel.com
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove
> LIST_END enumerators
>
> 12/10/2021 16:47, Kinsella, Ray:
> > On 12/10/2021 15:18, Anoob Joseph wrote:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > >> 12/10/2021 15:38, Anoob Joseph:
> > >>> From: Thomas Monjalon <thomas@monjalon.net>
> > >>>> 12/10/2021 13:34, Anoob Joseph:
> > >>>>> From: Kinsella, Ray <mdr@ashroe.eu>
> > >>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
> > >>>>>>> From: Akhil Goyal <gakhil@marvell.com>
> > >>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > >>>>>>>>>> Remove *_LIST_END enumerators from asymmetric crypto
> lib to
> > >>>>>>>>>> avoid ABI breakage for every new addition in enums.
> > >>>>>>>>>>
> > >>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > >>>>>>>>>> ---
> > >>>>>>>>>> - } else if (xform->xform_type >=
> > >>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > >>>>>>>>>> + } else if (xform->xform_type >
> > >>>> RTE_CRYPTO_ASYM_XFORM_ECPM
> > >>>> [...]
> > >>>>>>>>>
> > >>>>>>>>> So I am not sure that this is an improvement.
> > >>>>
> > >>>> Indeed, it is not an improvement.
> > >>>>
> > >>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
> > >>>>>>>>> used to size arrays.
> > >>>>>>>>> And that broke when new algorithms got added. Is that an
> > >>>>>>>>> issue, in this
> > >>>>>> case?
> > >>>>>>>>
> > >>>>>>>> Yes we did this same exercise for symmetric crypto enums
> earlier.
> > >>>>>>>> Asym enums were left as it was experimental at that point.
> > >>>>>>>> They are still experimental, but thought of making this
> > >>>>>>>> uniform throughout DPDK enums.
> > >>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> I am not sure that swapping out _LIST_END, and then
> > >>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > >>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> > >>>> improvement
> > >>>>>>>> here.
> > >>>>>>>>>
> > >>>>>>>>> My 2c is that from an ABI PoV
> RTE_CRYPTO_ASYM_OP_LIST_END is
> > >>>>>>>>> not better or worse, than
> > >>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > >>>>>>>>>
> > >>>>>>>>> Interested to hear other thoughts.
> > >>>>>>>>
> > >>>>>>>> I don’t have any better solution for avoiding ABI issues for now.
> > >>>>>>>> The change is for avoiding ABI breakage. But we can drop this
> > >>>>>>>> patch For now as asym is still experimental.
> > >>>>>>>
> > >>>>>>> [Anoob] Having LIST_END would preclude new additions to
> > >>>>>>> asymmetric
> > >>>> algos?
> > >>>>>> If yes, then I would suggest we address it now.
> > >>>>>>
> > >>>>>> Not at all - but it can be problematic, if two versions of DPDK
> > >>>>>> disagree with the value of LIST_END.
> > >>>>>>
> > >>>>>>> Looking at the "problematic changes", we only have 2-3
> > >>>>>>> application & PMD changes. For unit test application, we could
> > >>>>>>> may be do something like,
> > >>>>>>
> > >>>>>> The essential functionality is not that different, I am just not
> > >>>>>> sure that the verbosity below is helping.
> > >>>>>> What you are really trying to guard against is people using
> > >>>>>> LIST_END to size arrays.
> > >>>>>
> > >>>>> [Anoob] Our problem is application using LIST_END (which comes
> > >>>>> from library)
> > >>>> to determine the number of iterations for the loop. My suggestion
> > >>>> is to modify the UT such that, we could use RTE_DIM(types) (which
> > >>>> comes from application) to determine iterations of loop. This
> > >>>> would solve the
> > >> problem, right?
> > >>>>
> > >>>> The problem is not the application.
> > >>>> Are you asking the app to define DPDK types?
> > >>>
> > >>> [Anoob] I didn't understand how you concluded that.
> > >>
> > >> Because you define a specific array in the test app.
> > >>
> > >>> The app is supposed to test "n" asymmetric features supported by
> DPDK.
> > >> Currently, it does that by looping from 0 to LIST_END which happens
> > >> to give you the first n features. Now, if we add any new asymmetric
> > >> feature, LIST_END value would change. Isn't that the very reason
> > >> why we removed LIST_END from symmetric library and applications?
> > >>
> > >> Yes
> > >>
> > >>> Now coming to what I proposed, the app is supposed to test "n"
> > >>> asymmetric
> > >> features. LIST_END helps in doing the loops. If we remove LIST_END,
> > >> then application will not be in a position to do a loop. My
> > >> suggestion is, we list the types that are supposed to be tested by
> > >> the app, and let that array be used as feature list.
> > >>>
> > >>> PS: Just to reiterate, my proposal is just a local array which
> > >>> would hold DPDK
> > >> defined RTE enum values for the features that would be tested by
> > >> this app/function.
> > >>
> > >> I am more concerned by the general case than the test app.
> > >> I think a function returning a number is more app-friendly.
> > >
> > > [Anoob] Indeed. But there are 3 LIST_ENDs removed with this patch. Do
> you propose 3 new APIs to just get max number?
> >
> > 1 API returning a single "info" structure perhaps - as being the most
> extensible?
>
> Or 3 iterators (foreach construct).
> Instead of just returning a size, we can have an iterator for each enum which
> needs to be iterated.
[Anoob] Something like this?
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 847b074a4f..68a6197851 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -542,7 +542,7 @@ test_one_case(const void *test_case, int sessionless)
printf(" %u) TestCase %s %s\n", test_index++,
tc.modex.description, test_msg);
} else {
- for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+ RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) {
if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (tc.rsa_data.op_type_flags & (1 << i)) {
if (tc.rsa_data.key_exp) {
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9c866f553f..5627dcaff1 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -119,6 +119,11 @@ enum rte_crypto_asym_op_type {
RTE_CRYPTO_ASYM_OP_LIST_END
};
+#define RTE_CRYPTO_ASYM_FOREACH_OP_TYPE(i) \
+ for (i = RTE_CRYPTO_ASYM_OP_ENCRYPT; \
+ i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; \
+ i++)
+
/**
* Padding types for RSA signature.
*/
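For reference, a minimal sketch of the other approach discussed above in this thread for the unit test: the application lists the op types it intends to cover and sizes the loop with RTE_DIM, so no library end marker is needed. The array contents and the printf body are illustrative only.

#include <stdio.h>
#include <rte_common.h>		/* RTE_DIM */
#include <rte_crypto_asym.h>

/* Sketch: the loop bound comes from the application's own array, so it does
 * not change if a later DPDK release appends new asymmetric op types. */
static const enum rte_crypto_asym_op_type asym_op_types[] = {
	RTE_CRYPTO_ASYM_OP_ENCRYPT,
	RTE_CRYPTO_ASYM_OP_DECRYPT,
	RTE_CRYPTO_ASYM_OP_SIGN,
	RTE_CRYPTO_ASYM_OP_VERIFY,
	RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
};

static void
test_listed_asym_op_types(void)
{
	unsigned int i;

	for (i = 0; i < RTE_DIM(asym_op_types); i++)
		printf("testing asym op type %d\n", asym_op_types[i]);
}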
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
2021-10-13 2:36 4% ` Dmitry Kozlyuk
@ 2021-10-13 3:12 0% ` Peng, ZhihongX
0 siblings, 0 replies; 200+ results
From: Peng, ZhihongX @ 2021-10-13 3:12 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: olivier.matz, dev, stable
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 13, 2021 10:36 AM
> To: Peng, ZhihongX <zhihongx.peng@intel.com>
> Cc: olivier.matz@6wind.com; dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
>
> 2021-10-13 01:53 (UTC+0000), Peng, ZhihongX:
> > > -----Original Message-----
> > > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > > Sent: Monday, October 11, 2021 4:26 PM
> > > To: Peng, ZhihongX <zhihongx.peng@intel.com>
> > > Cc: olivier.matz@6wind.com; dev@dpdk.org; stable@dpdk.org
> > > Subject: Re: [PATCH v3 1/2] lib/cmdline: release cl when cmdline
> > > exit
> > >
> > > 2021-10-08 06:41 (UTC+0000), zhihongx.peng@intel.com:
> > > > From: Zhihong Peng <zhihongx.peng@intel.com>
> > > >
> > > > Malloc cl in the cmdline_stdin_new function, so release in the
> > > > cmdline_stdin_exit function is logical, so that cl will not be
> > > > released alone.
> > > >
> > > > Fixes: af75078fece3 (first public release)
> > > > Cc: stable@dpdk.org
> > >
> > > As I have explained before, backporting this will introduce a
> > > double-free bug in user apps unless their code are fixed, so it must not
> be done.
> >
> > The release notes have stated that this is the only thing we can do,
> > and this unreasonable design should be resolved as soon as possible.
> > And the user apps change is very small.
>
> Stable release means stable ABI, which means that a compiled binary can use
> the next minor version of DPDK without recompilation. No code change is
> possible in this scenario. If the behavior changes such that cmdline_exit() +
> cmdline_free() worked before and now cmdline_free() causes a double-free,
> this is an ABI breakage. Simply put, DPDK .so are replaced, the app restarts
> and crashes. Users can do nothing about that.
>
> Release notes are for developers updating their application code for the next
> DPDK version.
I may not understand what you mean. I want to know whether this code
can be merged, and if it can be merged, what work I need to do.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
2021-10-13 1:53 0% ` Peng, ZhihongX
@ 2021-10-13 2:36 4% ` Dmitry Kozlyuk
2021-10-13 3:12 0% ` Peng, ZhihongX
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-13 2:36 UTC (permalink / raw)
To: Peng, ZhihongX; +Cc: olivier.matz, dev, stable
2021-10-13 01:53 (UTC+0000), Peng, ZhihongX:
> > -----Original Message-----
> > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > Sent: Monday, October 11, 2021 4:26 PM
> > To: Peng, ZhihongX <zhihongx.peng@intel.com>
> > Cc: olivier.matz@6wind.com; dev@dpdk.org; stable@dpdk.org
> > Subject: Re: [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
> >
> > 2021-10-08 06:41 (UTC+0000), zhihongx.peng@intel.com:
> > > From: Zhihong Peng <zhihongx.peng@intel.com>
> > >
> > > Malloc cl in the cmdline_stdin_new function, so release in the
> > > cmdline_stdin_exit function is logical, so that cl will not be
> > > released alone.
> > >
> > > Fixes: af75078fece3 (first public release)
> > > Cc: stable@dpdk.org
> >
> > As I have explained before, backporting this will introduce a double-free bug
> > in user apps unless their code is fixed, so it must not be done.
>
> The release notes have stated that this is the only thing we can do,
> and this unreasonable design should be resolved as soon as possible.
> And the user apps change is very small.
Stable release means stable ABI, which means that a compiled binary can use
the next minor version of DPDK without recompilation. No code change is
possible in this scenario. If the behavior changes such that cmdline_exit() +
cmdline_free() worked before and now cmdline_free() causes a double-free, this
is an ABI breakage. Simply put, DPDK .so are replaced, the app restarts and
crashes. Users can do nothing about that.
Release notes are for developers updating their application code for the next
DPDK version.
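For context, a minimal sketch of the application-side pattern being discussed, assuming the cmdline_stdin_exit() change proposed in this thread; the function interactive_shell() and the prompt string are illustrative only.

#include <cmdline.h>
#include <cmdline_socket.h>

static void
interactive_shell(cmdline_parse_ctx_t *ctx)
{
	struct cmdline *cl = cmdline_stdin_new(ctx, "example> ");

	if (cl == NULL)
		return;
	cmdline_interact(cl);
	cmdline_stdin_exit(cl);	/* with the patch, this also frees cl */
	/* Old application pattern: an explicit cmdline_free(cl) here.
	 * Once cmdline_stdin_exit() frees cl, that call must be removed,
	 * otherwise it becomes the double-free described above. */
}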
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4 1/2] lib/cmdline: release cl when cmdline exit
2021-10-08 6:41 4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-11 5:20 0% ` Peng, ZhihongX
2021-10-11 8:25 0% ` Dmitry Kozlyuk
@ 2021-10-13 1:52 4% ` zhihongx.peng
2 siblings, 0 replies; 200+ results
From: zhihongx.peng @ 2021-10-13 1:52 UTC (permalink / raw)
To: olivier.matz, dmitry.kozliuk; +Cc: dev, Zhihong Peng, stable
From: Zhihong Peng <zhihongx.peng@intel.com>
cl is allocated in the cmdline_stdin_new function, so releasing it in
the cmdline_stdin_exit function is logical; this way cl does not have
to be released separately.
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/cmdline/cmdline_socket.c | 1 +
2 files changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f643a61f44..2f59077709 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
+ Calls to ``cmdline_free()`` after it need to be deleted from applications.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index 998e8ade25..ebd5343754 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
return;
terminal_restore(cl);
+ cmdline_free(cl);
}
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
2021-10-11 8:25 0% ` Dmitry Kozlyuk
@ 2021-10-13 1:53 0% ` Peng, ZhihongX
2021-10-13 2:36 4% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Peng, ZhihongX @ 2021-10-13 1:53 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: olivier.matz, dev, stable
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Monday, October 11, 2021 4:26 PM
> To: Peng, ZhihongX <zhihongx.peng@intel.com>
> Cc: olivier.matz@6wind.com; dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
>
> 2021-10-08 06:41 (UTC+0000), zhihongx.peng@intel.com:
> > From: Zhihong Peng <zhihongx.peng@intel.com>
> >
> > Malloc cl in the cmdline_stdin_new function, so release in the
> > cmdline_stdin_exit function is logical, so that cl will not be
> > released alone.
> >
> > Fixes: af75078fece3 (first public release)
> > Cc: stable@dpdk.org
>
> As I have explained before, backporting this will introduce a double-free bug
> in user apps unless their code is fixed, so it must not be done.
The release notes have stated that this is the only thing we can do,
and this unreasonable design should be resolved as soon as possible.
And the user apps change is very small.
> >
> > Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> > ---
> > doc/guides/rel_notes/release_21_11.rst | 5 +++++
> > lib/cmdline/cmdline_socket.c | 1 +
> > 2 files changed, 6 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index efeffe37a0..be24925d16 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -191,6 +191,11 @@ API Changes
> > the crypto/security operation. This field will be used to communicate
> > events such as soft expiry with IPsec in lookaside mode.
> >
> > +* cmdline: The API cmdline_stdin_exit has added cmdline_free function.
> > + Malloc cl in the cmdline_stdin_new function, so release in the
> > + cmdline_stdin_exit function is logical. The application code
> > + that calls cmdline_free needs to be deleted.
> > +
>
> There's probably no need to go into such details, suggestion:
>
> * cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
> Calls to ``cmdline_free()`` after it need to be deleted from applications.
v4 version will be fixed.
> >
> > ABI Changes
> > -----------
> > diff --git a/lib/cmdline/cmdline_socket.c
> > b/lib/cmdline/cmdline_socket.c index 998e8ade25..ebd5343754 100644
> > --- a/lib/cmdline/cmdline_socket.c
> > +++ b/lib/cmdline/cmdline_socket.c
> > @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
> > return;
> >
> > terminal_restore(cl);
> > + cmdline_free(cl);
> > }
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
2021-10-12 19:26 3% ` Walker, Benjamin
@ 2021-10-12 21:50 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-12 21:50 UTC (permalink / raw)
To: Harris, James R, Walker, Benjamin
Cc: Liu, Changpeng, Xia, Chenbo, David Marchand, dev, Aaron Conole,
Zawadzki, Tomasz
12/10/2021 21:26, Walker, Benjamin:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > 12/10/2021 18:59, Walker, Benjamin:
> > > For networking drivers, maybe. But certainly years and years ago when SPDK
> > was started no one recommended putting an nvme driver into DPDK.
> >
> > No one from SPDK project proposed such thing.
> > I asked several times in person why that, and even the DPDK techboard asked for
> > such a merge:
> > https://mails.dpdk.org/archives/dev/2018-December/120706.html
> > The reply:
> > http://inbox.dpdk.org/dev/20181217141030.bhe5pwlqnzb3w3i7@platinum/
> > Even older question in 2015:
> > http://inbox.dpdk.org/dev/6421280.5XkMhqyP4M@xps13/
> >
>
> For my part in these discussions, it was always about merging the governance of the projects rather than the code. I don't think a merger even occurred to anyone I spoke with during that - certainly it didn't to me. SPDK is huge and beyond its use of EAL/PCI doesn't share much in common with the rest of DPDK (SPDK uses lightweight green threading, all virtual addresses, etc.). Anyway, as I pointed out one of our key use cases for several users is the ability to replace DPDK entirely, so merging isn't an option.
OK I understand, that's clear.
It would be interesting to know if the NVMe drivers could be split in two parts:
one part in DPDK, and the other part in SPDK for the non-DPDK case.
I ask because it may ease things for DPDK integration in SPDK.
There is probably a cost for the SPDK project, so it could be interesting
to compare pros and cons, if possible at all.
> > > This means that a distro-packaged SPDK cannot exist, because it cannot use a
> > distro-packaged DPDK as a dependency.
> >
> > I don't think so.
> > Once SPDK is packaged, what do you need from DPDK?
> > I think you need only .so files for some libs like EAL and PCI, so that's available in
> > the DPDK package, right?
> >
>
> So is DPDK committed to maintaining the existing ABI,
> such that the necessary symbols are still exported
> even when enable_driver_sdk is off?
Symbols required by drivers are necessarily exported.
Do you think I am missing something?
Do you need EAL internal functions?
We should check which functions are called by SPDK,
because there is a trend to export fewer functions if not needed.
> This option will, into the foreseeable future,
> only impact the installation of those header files?
I don't see what else it could impact.
> If that's the case, we can just copy the header file into SPDK,
> as could anyone else that wants to continue using DPDK
> to implement out of tree drivers.
> Can you clarify if something like this scheme would be considered a supported use of DPDK?
DPDK can be used by anybody as long as the (permissive) license is respected.
I consider copying files as a source of sync issues, but you are free.
In order to be perfectly clear, all the changes done
around this option enable_driver_sdk share the goal of tidying stuff
in DPDK so that ABI becomes better manageable.
I think that nobody want to annoy the SPDK project.
I understand that the changes effectively add troubles, and I am sorry
about that. If SPDK and other projects can manage with this change, good.
If there is a real blocker, we should discuss what are the options.
Thanks for your understanding
* [dpdk-dev] [PATCH v5 1/5] ethdev: update modify field flow action
@ 2021-10-12 20:25 3% ` Viacheslav Ovsiienko
0 siblings, 0 replies; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-12 20:25 UTC (permalink / raw)
To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- immediate source can be presented either as an unsigned
64-bit integer or pointer to data pattern in memory.
There was no explicit pointer field defined in the union.
- the byte ordering for 64-bit integer was not specified.
Many fields have shorter lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was not defined and documented.
- 64-bit integer size is not enough to provide IPv6
addresses.
In order to cover the issues and exclude any ambiguities
the following is done:
- introduce the explicit pointer field
in rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with 16-byte array
- update the modify field flow action documentation
Appropriate deprecation notice has been removed.
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
Fixes: 2ba49b5f3721 ("doc: announce change to ethdev modify action data")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
doc/guides/prog_guide/rte_flow.rst | 24 +++++++++++++++++++++++-
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 7 +++++++
lib/ethdev/rte_flow.h | 16 ++++++++++++----
4 files changed, 42 insertions(+), 9 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..b08087511f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,22 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+``RTE_FLOW_FIELD_MAC_DST`` should follow the conventions of ``dst`` field
+in ``rte_flow_item_eth`` structure, with type ``RTE_FLOW_FIELD_IPV6_SRC`` -
+``rte_flow_item_ipv6`` conventions, and so on. If the field size is larger than
+16 bytes the pattern can be provided as pointer only.
+
+The bitfield extracted from the memory being applied as second operation
+parameter is defined by action width and by the destination field offset.
+Application should provide the data in immediate value memory (either as
+buffer or by pointer) exactly as item field without any applied explicit offset,
+and destination packet field (with specified width and bit offset) will be
+replaced by immediate source bits from the same bit offset. For example,
+to replace the third byte of MAC address with value 0x85, application should
+specify destination width as 8, destination offset as 16, and provide immediate
+value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
.. _table_rte_flow_action_modify_field:
@@ -2865,7 +2881,13 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+---------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
+---------------+----------------------------------------------------------+
- | ``value`` | immediate value or a pointer to this value |
+ | ``value`` | immediate value buffer (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_VALUE |
+ | | field type |
+ +---------------+----------------------------------------------------------+
+ | ``pvalue`` | pointer to immediate value data (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_POINTER |
+ | | field type |
+---------------+----------------------------------------------------------+
Action: ``CONNTRACK``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..dee14077a5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -120,10 +120,6 @@ Deprecation Notices
* ethdev: Announce moving from dedicated modify function for each field,
to using the general ``rte_flow_modify_field`` action.
-* ethdev: The struct ``rte_flow_action_modify_data`` will be modified
- to support modifying fields larger than 64 bits.
- In addition, documentation will be updated to clarify byte order.
-
* ethdev: Attribute ``shared`` of the ``struct rte_flow_action_count``
is deprecated and will be removed in DPDK 21.11. Shared counters should
be managed using shared actions API (``rte_flow_shared_action_create`` etc).
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..578c1206e7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,13 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+ array is extended, data pointer field is explicitly added to union, the
+ action behavior is defined in more strict fashion and documentation updated.
+ The immediate value behavior has been changed, the entire immediate field
+ should be provided, and offset for immediate source bitfield is assigned
+ from destination one.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..f14f77772b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3217,10 +3217,18 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in relevant rte_flow_item_xxx.
+ * The immediate source bitfield offset is inherited from
+ * the destination's one.
*/
- uint64_t value;
+ uint8_t value[16];
+ /**
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
@@ -3240,7 +3248,7 @@ enum rte_flow_modify_op {
* RTE_FLOW_ACTION_TYPE_MODIFY_FIELD
*
* Modify a destination header field according to the specified
- * operation. Another packet field can be used as a source as well
+ * operation. Another field of the packet can be used as a source as well
* as tag, mark, metadata, immediate value or a pointer to it.
*/
struct rte_flow_action_modify_field {
--
2.18.1
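[Editor's note] To make the example above concrete, here is a hedged sketch (not taken from the patch) of how an application might fill the action configuration to overwrite the third byte of the destination MAC with 0x85, assuming the updated structure with the value[16] array:

#include <string.h>
#include <rte_flow.h>

/* Sketch: set bits [16..23] of the destination MAC to 0x85,
 * following the documentation example above. */
static void
build_modify_dst_mac_byte(struct rte_flow_action_modify_field *conf)
{
	memset(conf, 0, sizeof(*conf));
	conf->operation = RTE_FLOW_MODIFY_SET;
	conf->dst.field = RTE_FLOW_FIELD_MAC_DST;
	conf->dst.offset = 16;                 /* skip the first two bytes */
	conf->src.field = RTE_FLOW_FIELD_VALUE;
	/* Immediate data laid out exactly as rte_flow_item_eth.dst;
	 * only 'width' bits starting at the destination offset are used. */
	conf->src.value[2] = 0x85;
	conf->width = 8;
}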
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
@ 2021-10-12 19:26 3% ` Walker, Benjamin
2021-10-12 21:50 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Walker, Benjamin @ 2021-10-12 19:26 UTC (permalink / raw)
To: Thomas Monjalon, Liu, Changpeng, Xia, Chenbo, Harris, James R
Cc: David Marchand, dev, Aaron Conole, Zawadzki, Tomasz
> From: Thomas Monjalon <thomas@monjalon.net>
> 12/10/2021 18:59, Walker, Benjamin:
> > For networking drivers, maybe. But certainly years and years ago when SPDK
> was started no one recommended putting an nvme driver into DPDK.
>
> No one from SPDK project proposed such thing.
> I asked several times in person why that, and even the DPDK techboard asked for
> such a merge:
> https://mails.dpdk.org/archives/dev/2018-December/120706.html
> The reply:
> http://inbox.dpdk.org/dev/20181217141030.bhe5pwlqnzb3w3i7@platinum/
> Even older question in 2015:
> http://inbox.dpdk.org/dev/6421280.5XkMhqyP4M@xps13/
>
For my part in these discussions, it was always about merging the governance of the projects rather than the code. I don't think a code merger even occurred to anyone I spoke with at the time - it certainly didn't to me. SPDK is huge and, beyond its use of EAL/PCI, doesn't have much in common with the rest of DPDK (SPDK uses lightweight green threading, all virtual addresses, etc.). Anyway, as I pointed out, one of our key use cases for several users is the ability to replace DPDK entirely, so merging isn't an option.
> > This means that a distro-packaged SPDK cannot exist, because it cannot use a
> distro-packaged DPDK as a dependency.
>
> I don't think so.
> Once SPDK is packaged, what do you need from DPDK?
> I think you need only .so files for some libs like EAL and PCI, so that's available in
> the DPDK package, right?
>
So is DPDK committed to maintaining the existing ABI, such that the necessary symbols are still exported even when enable_driver_sdk is off? Will this option, for the foreseeable future, only impact the installation of those header files? If that's the case, we can just copy the header file into SPDK, as could anyone else who wants to continue using DPDK to implement out-of-tree drivers. Can you clarify whether something like this scheme would be considered a supported use of DPDK?
Thanks,
Ben
* Re: [dpdk-dev] [PATCH v12 00/12] Packet capture framework update
2021-10-12 18:00 3% ` Stephen Hemminger
@ 2021-10-12 18:22 0% ` Thomas Monjalon
2021-10-13 8:44 0% ` Pattan, Reshma
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 18:22 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Pattan, Reshma, dev, Richardson, Bruce, david.marchand
12/10/2021 20:00, Stephen Hemminger:
> On Tue, 12 Oct 2021 17:48:47 +0200
> Thomas Monjalon <thomas@monjalon.net> wrote:
> > 12/10/2021 17:44, Stephen Hemminger:
> > > On Tue, 12 Oct 2021 10:21:41 +0000
> > > "Pattan, Reshma" <reshma.pattan@intel.com> wrote:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > >
> > > > > I was hoping to see a feedback from the current maintainer, but it seems you
> > > > > didn't Cc her... Reshma, are you aware of these patches?
> > > >
> > > > I was aware of v10 where I had comments, I will take a look at V12.
> > > > Yes, please add me to CC for future patch sets, that would help me to not miss them.
> > >
> > > This means we have a flawed process if patches can't get
> > > reviewed that have been submitted a month ahead of release.
> >
> > Part of the process, you are supposed to use "--cc-cmd devtools/get-maintainer.sh"
> > so maintainers are Cc'ed.
>
> Thought they were, look like Reshma got missed.
>
> No worries about doing in 22.02 since there is no API/ABI breakage in the
> patchset. No pre-release note needed either.
We can still merge it for 21.11 if Reshma is OK with the v12.
* Re: [dpdk-dev] [PATCH v12 00/12] Packet capture framework update
@ 2021-10-12 18:00 3% ` Stephen Hemminger
2021-10-12 18:22 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-10-12 18:00 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Pattan, Reshma, dev, Richardson, Bruce, david.marchand
On Tue, 12 Oct 2021 17:48:47 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:
> 12/10/2021 17:44, Stephen Hemminger:
> > On Tue, 12 Oct 2021 10:21:41 +0000
> > "Pattan, Reshma" <reshma.pattan@intel.com> wrote:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > >
> > > > I was hoping to see a feedback from the current maintainer, but it seems you
> > > > didn't Cc her... Reshma, are you aware of these patches?
> > >
> > > I was aware of v10 where I had comments, I will take a look at V12.
> > > Yes, please add me to CC for future patch sets, that would help me to not miss them.
> >
> > This means we have a flawed process if patches can't get
> > reviewed that have been submitted a month ahead of release.
>
> Part of the process, you are supposed to use "--cc-cmd devtools/get-maintainer.sh"
> so maintainers are Cc'ed.
I thought they were; looks like Reshma got missed.
No worries about doing it in 22.02, since there is no API/ABI breakage in the
patchset. No pre-release note is needed either.
* Re: [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
2021-10-07 11:27 6% ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-11 8:06 0% ` Andrew Rybchenko
@ 2021-10-12 17:59 0% ` Hyong Youb Kim (hyonkim)
1 sibling, 0 replies; 200+ results
From: Hyong Youb Kim (hyonkim) @ 2021-10-12 17:59 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, John Daley (johndale),
qi.z.zhang, xiao.w.wang, humin29, yisen.zhuang, oulijun,
beilei.xing, jingjing.wu, qiming.yang, matan, viacheslavo,
sthemmin, longli, heinrich.kuhn, kirankumark, andrew.rybchenko,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
> -----Original Message-----
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Sent: Thursday, October 7, 2021 8:28 PM
[...]
> Subject: [PATCH v5 3/7] ethdev: change input parameters for
> rx_queue_count
>
> Currently majority of fast-path ethdev ops take pointers to internal
> queue data structures as an input parameter.
> While eth_rx_queue_count() takes a pointer to rte_eth_dev and queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify parameters list of all fast-path ethdev ops.
> This patch changes eth_rx_queue_count() to accept pointer to internal
> queue data as input parameter.
> While this change is transparent to user, it still counts as an ABI change,
> as eth_rx_queue_count_t is used by ethdev public inline function
> rte_eth_rx_queue_count().
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
For net/enic,
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Thanks.
-Hyong
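[Editor's note] For readers tracking the ABI aspect, a before/after sketch of the callback typedef as described in the commit message; the _old_t/_new_t suffixes are illustrative only, and the exact new prototype is inferred from the text above rather than copied from the patch:

#include <stdint.h>

struct rte_eth_dev; /* opaque for the purpose of this sketch */

/* Before: the callback took the ethdev pointer plus a queue index. */
typedef uint32_t (*eth_rx_queue_count_old_t)(struct rte_eth_dev *dev,
					     uint16_t rx_queue_id);

/* After (per the description): the callback receives the internal
 * Rx queue data pointer directly, like the other fast-path ops. */
typedef uint32_t (*eth_rx_queue_count_new_t)(void *rxq);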
* Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
2021-10-12 16:36 4% ` Chautru, Nicolas
@ 2021-10-12 16:59 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-12 16:59 UTC (permalink / raw)
To: Chautru, Nicolas
Cc: gakhil, dev, trix, hemant.agrawal, Zhang, Mingshan, Yigit, Ferruh
12/10/2021 18:36, Chautru, Nicolas:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 11/10/2021 22:38, Chautru, Nicolas:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 13/08/2021 18:51, Nicolas Chautru:
> > > > > Adding a missing operation when CRC16 is being used for TB CRC
> > > > > check.
> > > > >
> > > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > > ---
> > > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > > @@ -84,6 +84,7 @@ API Changes
> > > > > Also, make sure to start the actual text at the margin.
> > > > >
> > =======================================================
> > > > >
> > > > > +* bbdev: Added capability related to more comprehensive CRC
> > options.
> > > >
> > > > That's not an API change, the enum symbols are the same.
> > > > Only enum values are changed so it impacts only ABI.
> > >
> > > Hi Thomas,
> > > How is that not a API change when new additional capability are exposed?
> > Ie. new enums defined for new capabilities.
> >
> > API change is when the app source code has to be updated.
>
> Thanks. What you are referring to may be strictly API breakage as opposed to generic API change. I would expect an API change could be either backward compatible (extending API but application only has to change if it wants to use the new functionality) vs an actual API breakage (application needs to change regardless even to keep same functionality as before).
Yes
An API change which does not break is a new feature, so it is referenced
at the beginning of the release notes in general.
> In case the intent is to use the 2 terms interchangeably (change vs breakage) then I agree that these 2 bbdev changes do not constitute an API breakage (only ABI).
> It might be good to capture this more explicitly except if you believe this is obvious (doc describes ABI change, not API change). Regardless for next time I will use that distinction (change == breakage).
Yes, feel free to send a patch to rename "change" to "breakage".
* Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
2021-10-12 6:53 3% ` Thomas Monjalon
@ 2021-10-12 16:36 4% ` Chautru, Nicolas
2021-10-12 16:59 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-12 16:36 UTC (permalink / raw)
To: Thomas Monjalon
Cc: gakhil, dev, trix, hemant.agrawal, Zhang, Mingshan, Yigit, Ferruh
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, October 11, 2021 11:53 PM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: gakhil@marvell.com; dev@dpdk.org; trix@redhat.com;
> hemant.agrawal@nxp.com; Zhang, Mingshan <mingshan.zhang@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16
> check
>
> 11/10/2021 22:38, Chautru, Nicolas:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 13/08/2021 18:51, Nicolas Chautru:
> > > > Adding a missing operation when CRC16 is being used for TB CRC
> > > > check.
> > > >
> > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > ---
> > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > @@ -84,6 +84,7 @@ API Changes
> > > > Also, make sure to start the actual text at the margin.
> > > >
> =======================================================
> > > >
> > > > +* bbdev: Added capability related to more comprehensive CRC
> options.
> > >
> > > That's not an API change, the enum symbols are the same.
> > > Only enum values are changed so it impacts only ABI.
> >
> > Hi Thomas,
> > How is that not a API change when new additional capability are exposed?
> Ie. new enums defined for new capabilities.
>
> API change is when the app source code has to be updated.
Thanks. What you are referring to may be strictly API breakage as opposed to a generic API change. I would expect an API change could be either backward compatible (extending the API, so an application only has to change if it wants to use the new functionality) or an actual API breakage (the application needs to change regardless, even to keep the same functionality as before).
If the intent is to use the two terms interchangeably (change vs. breakage), then I agree that these two bbdev changes do not constitute an API breakage (only ABI).
It might be good to capture this more explicitly, unless you believe it is obvious (the doc describes an ABI change, not an API change). Regardless, next time I will use that distinction (change == breakage).
Thanks
> ABI change is when the app binary has to be rebuilt.
>
> > I think I see other similar cases in the same release notes " * cryptodev:
> ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algo ...".
>
> I don't see this one.
>
> > You know best, just checking the intent, maybe worth clarifying the
> guideline except in case this is just me.
>
> Given my explanation above, how would you classify your change?
>
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 14:47 0% ` Kinsella, Ray
@ 2021-10-12 15:06 0% ` Thomas Monjalon
2021-10-13 5:36 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 15:06 UTC (permalink / raw)
To: Anoob Joseph, Akhil Goyal, dev, Kinsella, Ray
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
12/10/2021 16:47, Kinsella, Ray:
> On 12/10/2021 15:18, Anoob Joseph wrote:
> > From: Thomas Monjalon <thomas@monjalon.net>
> >> 12/10/2021 15:38, Anoob Joseph:
> >>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>> 12/10/2021 13:34, Anoob Joseph:
> >>>>> From: Kinsella, Ray <mdr@ashroe.eu>
> >>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
> >>>>>>> From: Akhil Goyal <gakhil@marvell.com>
> >>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
> >>>>>>>>>> Remove *_LIST_END enumerators from asymmetric crypto lib to
> >>>>>>>>>> avoid ABI breakage for every new addition in enums.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> >>>>>>>>>> ---
> >>>>>>>>>> - } else if (xform->xform_type >=
> >>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> >>>>>>>>>> + } else if (xform->xform_type >
> >>>> RTE_CRYPTO_ASYM_XFORM_ECPM
> >>>> [...]
> >>>>>>>>>
> >>>>>>>>> So I am not sure that this is an improvement.
> >>>>
> >>>> Indeed, it is not an improvement.
> >>>>
> >>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
> >>>>>>>>> used to size arrays.
> >>>>>>>>> And that broke when new algorithms got added. Is that an
> >>>>>>>>> issue, in this
> >>>>>> case?
> >>>>>>>>
> >>>>>>>> Yes we did this same exercise for symmetric crypto enums earlier.
> >>>>>>>> Asym enums were left as it was experimental at that point.
> >>>>>>>> They are still experimental, but thought of making this
> >>>>>>>> uniform throughout DPDK enums.
> >>>>>>>>
> >>>>>>>>>
> >>>>>>>>> I am not sure that swapping out _LIST_END, and then
> >>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> >>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> >>>> improvement
> >>>>>>>> here.
> >>>>>>>>>
> >>>>>>>>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is
> >>>>>>>>> not better or worse, than
> >>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> >>>>>>>>>
> >>>>>>>>> Interested to hear other thoughts.
> >>>>>>>>
> >>>>>>>> I don’t have any better solution for avoiding ABI issues for now.
> >>>>>>>> The change is for avoiding ABI breakage. But we can drop this
> >>>>>>>> patch For now as asym is still experimental.
> >>>>>>>
> >>>>>>> [Anoob] Having LIST_END would preclude new additions to
> >>>>>>> asymmetric
> >>>> algos?
> >>>>>> If yes, then I would suggest we address it now.
> >>>>>>
> >>>>>> Not at all - but it can be problematic, if two versions of DPDK
> >>>>>> disagree with the value of LIST_END.
> >>>>>>
> >>>>>>> Looking at the "problematic changes", we only have 2-3
> >>>>>>> application & PMD changes. For unit test application, we could
> >>>>>>> may be do something like,
> >>>>>>
> >>>>>> The essental functionality not that different, I am just not
> >>>>>> sure that the verbosity below is helping.
> >>>>>> What you are really trying to guard against is people using
> >>>>>> LIST_END to size arrays.
> >>>>>
> >>>>> [Anoob] Our problem is application using LIST_END (which comes
> >>>>> from library)
> >>>> to determine the number of iterations for the loop. My suggestion is
> >>>> to modify the UT such that, we could use RTE_DIM(types) (which comes
> >>>> from application) to determine iterations of loop. This would solve the
> >> problem, right?
> >>>>
> >>>> The problem is not the application.
> >>>> Are you asking the app to define DPDK types?
> >>>
> >>> [Anoob] I didn't understand how you concluded that.
> >>
> >> Because you define a specific array in the test app.
> >>
> >>> The app is supposed to test "n" asymmetric features supported by DPDK.
> >> Currently, it does that by looping from 0 to LIST_END which happens to give you
> >> the first n features. Now, if we add any new asymmetric feature, LIST_END
> >> value would change. Isn't that the very reason why we removed LIST_END from
> >> symmetric library and applications?
> >>
> >> Yes
> >>
> >>> Now coming to what I proposed, the app is supposed to test "n" asymmetric
> >> features. LIST_END helps in doing the loops. If we remove LIST_END, then
> >> application will not be in a position to do a loop. My suggestion is, we list the
> >> types that are supposed to be tested by the app, and let that array be used as
> >> feature list.
> >>>
> >>> PS: Just to reiterate, my proposal is just a local array which would hold DPDK
> >> defined RTE enum values for the features that would be tested by this
> >> app/function.
> >>
> >> I am more concerned by the general case than the test app.
> >> I think a function returning a number is more app-friendly.
> >
> > [Anoob] Indeed. But there are 3 LIST_ENDs removed with this patch. Do you propose 3 new APIs to just get max number?
>
> 1 API returning a single "info" structure perhaps - as being the most extensible?
Or 3 iterators (a foreach construct).
Instead of just returning a size, we can have an iterator for each enum
that needs to be iterated.
Feel free to choose the alternative which fits best in cryptodev.
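[Editor's note] To make the two alternatives concrete, a hedged sketch follows; the function names are hypothetical, only illustrating a count getter versus a foreach-style iterator, and are not an agreed API:

#include <stdint.h>

/* Option A: the library exports the current number of xform types,
 * so applications stop sizing loops with a LIST_END enumerator. */
uint32_t rte_crypto_asym_xform_type_count(void);

/* Option B: an iterator hides the enum range entirely. It returns the
 * next valid type after 'cur' (start with a negative 'cur'), or a
 * negative value once the enumeration is exhausted. */
int rte_crypto_asym_xform_type_next(int cur);

/*
 * Usage sketch:
 *     for (t = rte_crypto_asym_xform_type_next(-1); t >= 0;
 *          t = rte_crypto_asym_xform_type_next(t))
 *             test_xform(t);
 */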
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 14:18 0% ` Anoob Joseph
@ 2021-10-12 14:47 0% ` Kinsella, Ray
2021-10-12 15:06 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-12 14:47 UTC (permalink / raw)
To: Anoob Joseph, Thomas Monjalon, Akhil Goyal, dev
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
On 12/10/2021 15:18, Anoob Joseph wrote:
> Hi Thomas,
>
> Please see inline.
>
> Thanks,
> Anoob
>
>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Tuesday, October 12, 2021 7:25 PM
>> To: Kinsella, Ray <mdr@ashroe.eu>; Akhil Goyal <gakhil@marvell.com>;
>> dev@dpdk.org; Anoob Joseph <anoobj@marvell.com>
>> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
>> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
>> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
>> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
>> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
>> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
>> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
>> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
>> Yigit, Ferruh <ferruh.yigit@intel.com>; bruce.richardson@intel.com
>> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
>> enumerators
>>
>> 12/10/2021 15:38, Anoob Joseph:
>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>> 12/10/2021 13:34, Anoob Joseph:
>>>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>>>>> On 12/10/2021 11:50, Anoob Joseph wrote:
>>>>>>> From: Akhil Goyal <gakhil@marvell.com>
>>>>>>>>> On 08/10/2021 21:45, Akhil Goyal wrote:
>>>>>>>>>> Remove *_LIST_END enumerators from asymmetric crypto lib to
>>>>>>>>>> avoid ABI breakage for every new addition in enums.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>>>>>>>> ---
>>>>>>>>>> - } else if (xform->xform_type >=
>>>>>>>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>>>>>>>>>> + } else if (xform->xform_type >
>>>> RTE_CRYPTO_ASYM_XFORM_ECPM
>>>> [...]
>>>>>>>>>
>>>>>>>>> So I am not sure that this is an improvement.
>>>>
>>>> Indeed, it is not an improvement.
>>>>
>>>>>>>>> The cryptodev issue we had, was that _LIST_END was being
>>>>>>>>> used to size arrays.
>>>>>>>>> And that broke when new algorithms got added. Is that an
>>>>>>>>> issue, in this
>>>>>> case?
>>>>>>>>
>>>>>>>> Yes we did this same exercise for symmetric crypto enums earlier.
>>>>>>>> Asym enums were left as it was experimental at that point.
>>>>>>>> They are still experimental, but thought of making this
>>>>>>>> uniform throughout DPDK enums.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure that swapping out _LIST_END, and then
>>>>>>>>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
>>>>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
>>>> improvement
>>>>>>>> here.
>>>>>>>>>
>>>>>>>>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is
>>>>>>>>> not better or worse, than
>>>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
>>>>>>>>>
>>>>>>>>> Interested to hear other thoughts.
>>>>>>>>
>>>>>>>> I don’t have any better solution for avoiding ABI issues for now.
>>>>>>>> The change is for avoiding ABI breakage. But we can drop this
>>>>>>>> patch For now as asym is still experimental.
>>>>>>>
>>>>>>> [Anoob] Having LIST_END would preclude new additions to
>>>>>>> asymmetric
>>>> algos?
>>>>>> If yes, then I would suggest we address it now.
>>>>>>
>>>>>> Not at all - but it can be problematic, if two versions of DPDK
>>>>>> disagree with the value of LIST_END.
>>>>>>
>>>>>>> Looking at the "problematic changes", we only have 2-3
>>>>>>> application & PMD changes. For unit test application, we could
>>>>>>> may be do something like,
>>>>>>
>>>>>> The essental functionality not that different, I am just not
>>>>>> sure that the verbosity below is helping.
>>>>>> What you are really trying to guard against is people using
>>>>>> LIST_END to size arrays.
>>>>>
>>>>> [Anoob] Our problem is application using LIST_END (which comes
>>>>> from library)
>>>> to determine the number of iterations for the loop. My suggestion is
>>>> to modify the UT such that, we could use RTE_DIM(types) (which comes
>>>> from application) to determine iterations of loop. This would solve the
>> problem, right?
>>>>
>>>> The problem is not the application.
>>>> Are you asking the app to define DPDK types?
>>>
>>> [Anoob] I didn't understand how you concluded that.
>>
>> Because you define a specific array in the test app.
>>
>>> The app is supposed to test "n" asymmetric features supported by DPDK.
>> Currently, it does that by looping from 0 to LIST_END which happens to give you
>> the first n features. Now, if we add any new asymmetric feature, LIST_END
>> value would change. Isn't that the very reason why we removed LIST_END from
>> symmetric library and applications?
>>
>> Yes
>>
>>> Now coming to what I proposed, the app is supposed to test "n" asymmetric
>> features. LIST_END helps in doing the loops. If we remove LIST_END, then
>> application will not be in a position to do a loop. My suggestion is, we list the
>> types that are supposed to be tested by the app, and let that array be used as
>> feature list.
>>>
>>> PS: Just to reiterate, my proposal is just a local array which would hold DPDK
>> defined RTE enum values for the features that would be tested by this
>> app/function.
>>
>> I am more concerned by the general case than the test app.
>> I think a function returning a number is more app-friendly.
>
> [Anoob] Indeed. But there are 3 LIST_ENDs removed with this patch. Do you propose 3 new APIs to just get max number?
1 API returning a single "info" structure perhaps - as being the most extensible?
>
>>
>>>>>>> + enum rte_crypto_asym_op_type types[] = {
>>>
>>>>
>>>> The problem is in DPDK API. We must not suggest a size for enums.
>>>
>>> [Anoob] So agreed that LIST_END should be removed?
>>
>> Yes
>>
>>>> If we really need a size, then it must be explicit and updated in
>>>> the lib binary (through a function) when the size increases.
>>>
>>> [Anoob] Precisely my thoughts. The loop with LIST_END done in application is
>> not correct.
>>>>
>>>>
>>>>
>>>>>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
>>>>>>> + enum rte_crypto_asym_op_type types[] = {
>>>>>>> + RTE_CRYPTO_ASYM_OP_ENCRYPT,
>>>>>>> + RTE_CRYPTO_ASYM_OP_DECRYPT,
>>>>>>> + RTE_CRYPTO_ASYM_OP_SIGN,
>>>>>>> + RTE_CRYPTO_ASYM_OP_VERIFY,
>>>>>>> + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
>>>>>>> + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
>>>>>>> +
>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
>>>>>>> + };
>>>>>>> + for (i = 0; i <= RTE_DIM(types); i++) {
>>>>>>> if (tc.modex.xform_type ==
>>>> RTE_CRYPTO_ASYM_XFORM_RSA) {
>>>>>>> - if (tc.rsa_data.op_type_flags & (1 << i)) {
>>>>>>> + if (tc.rsa_data.op_type_flags
>>>>>>> + & (1 <<
>>>>>>> + types[i])) {
>>>>>>> if (tc.rsa_data.key_exp) {
>>>>>>> status = test_cryptodev_asym_op(
>>>>>>> &testsuite_params, &tc,
>>>>>>> - test_msg, sessionless, i,
>>>>>>> +
>>>>>>> + test_msg, sessionless, types[i],
>>>>>>> RTE_RSA_KEY_TYPE_EXP);
>>>>>>> }
>>>>>>> if (status)
>>>>>>> break;
>>>>>>> - if (tc.rsa_data.key_qt && (i ==
>>>>>>> + if (tc.rsa_data.key_qt
>>>>>>> + && (types[i] ==
>>>>>>> RTE_CRYPTO_ASYM_OP_DECRYPT ||
>>>>>>> - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
>>>>>>> +
>>>>>>> + types[i] ==
>>>>>>> + RTE_CRYPTO_ASYM_OP_SIGN)) {
>>>>>>> status = test_cryptodev_asym_op(
>>>>>>> &testsuite_params,
>>>>>>> - &tc, test_msg, sessionless, i,
>>>>>>> + &tc,
>>>>>>> + test_msg, sessionless, types[i],
>>>>>>> RTE_RSA_KET_TYPE_QT);
>>>>>>> }
>>>>>>> if (status)
>>>>>>>
>>>>>>> This way, application would only use the ones which it is
>>>>>>> designed to work
>>>>>> with. For QAT driver changes, we could have an overload if
>>>>>> condition (if alg == x
>>>>>> || alg = y || ...) to get the same effect.
>>
>>
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 13:54 0% ` Thomas Monjalon
@ 2021-10-12 14:18 0% ` Anoob Joseph
2021-10-12 14:47 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-12 14:18 UTC (permalink / raw)
To: Thomas Monjalon, Kinsella, Ray, Akhil Goyal, dev
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
Hi Thomas,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 12, 2021 7:25 PM
> To: Kinsella, Ray <mdr@ashroe.eu>; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org; Anoob Joseph <anoobj@marvell.com>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
> Yigit, Ferruh <ferruh.yigit@intel.com>; bruce.richardson@intel.com
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
> enumerators
>
> 12/10/2021 15:38, Anoob Joseph:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 12/10/2021 13:34, Anoob Joseph:
> > > > From: Kinsella, Ray <mdr@ashroe.eu>
> > > > > On 12/10/2021 11:50, Anoob Joseph wrote:
> > > > > > From: Akhil Goyal <gakhil@marvell.com>
> > > > > >>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > > > > >>>> Remove *_LIST_END enumerators from asymmetric crypto lib to
> > > > > >>>> avoid ABI breakage for every new addition in enums.
> > > > > >>>>
> > > > > >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > > > >>>> ---
> > > > > >>>> - } else if (xform->xform_type >=
> > > > > >>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > > > >>>> + } else if (xform->xform_type >
> > > RTE_CRYPTO_ASYM_XFORM_ECPM
> > > [...]
> > > > > >>>
> > > > > >>> So I am not sure that this is an improvement.
> > >
> > > Indeed, it is not an improvement.
> > >
> > > > > >>> The cryptodev issue we had, was that _LIST_END was being
> > > > > >>> used to size arrays.
> > > > > >>> And that broke when new algorithms got added. Is that an
> > > > > >>> issue, in this
> > > > > case?
> > > > > >>
> > > > > >> Yes we did this same exercise for symmetric crypto enums earlier.
> > > > > >> Asym enums were left as it was experimental at that point.
> > > > > >> They are still experimental, but thought of making this
> > > > > >> uniform throughout DPDK enums.
> > > > > >>
> > > > > >>>
> > > > > >>> I am not sure that swapping out _LIST_END, and then
> > > > > >>> littering the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > > > > >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> > > improvement
> > > > > >> here.
> > > > > >>>
> > > > > >>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is
> > > > > >>> not better or worse, than
> > > > > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > > > > >>>
> > > > > >>> Interested to hear other thoughts.
> > > > > >>
> > > > > >> I don’t have any better solution for avoiding ABI issues for now.
> > > > > >> The change is for avoiding ABI breakage. But we can drop this
> > > > > >> patch For now as asym is still experimental.
> > > > > >
> > > > > > [Anoob] Having LIST_END would preclude new additions to
> > > > > > asymmetric
> > > algos?
> > > > > If yes, then I would suggest we address it now.
> > > > >
> > > > > Not at all - but it can be problematic, if two versions of DPDK
> > > > > disagree with the value of LIST_END.
> > > > >
> > > > > > Looking at the "problematic changes", we only have 2-3
> > > > > > application & PMD changes. For unit test application, we could
> > > > > > may be do something like,
> > > > >
> > > > > The essental functionality not that different, I am just not
> > > > > sure that the verbosity below is helping.
> > > > > What you are really trying to guard against is people using
> > > > > LIST_END to size arrays.
> > > >
> > > > [Anoob] Our problem is application using LIST_END (which comes
> > > > from library)
> > > to determine the number of iterations for the loop. My suggestion is
> > > to modify the UT such that, we could use RTE_DIM(types) (which comes
> > > from application) to determine iterations of loop. This would solve the
> problem, right?
> > >
> > > The problem is not the application.
> > > Are you asking the app to define DPDK types?
> >
> > [Anoob] I didn't understand how you concluded that.
>
> Because you define a specific array in the test app.
>
> > The app is supposed to test "n" asymmetric features supported by DPDK.
> Currently, it does that by looping from 0 to LIST_END which happens to give you
> the first n features. Now, if we add any new asymmetric feature, LIST_END
> value would change. Isn't that the very reason why we removed LIST_END from
> symmetric library and applications?
>
> Yes
>
> > Now coming to what I proposed, the app is supposed to test "n" asymmetric
> features. LIST_END helps in doing the loops. If we remove LIST_END, then
> application will not be in a position to do a loop. My suggestion is, we list the
> types that are supposed to be tested by the app, and let that array be used as
> feature list.
> >
> > PS: Just to reiterate, my proposal is just a local array which would hold DPDK
> defined RTE enum values for the features that would be tested by this
> app/function.
>
> I am more concerned by the general case than the test app.
> I think a function returning a number is more app-friendly.
[Anoob] Indeed. But there are 3 LIST_ENDs removed with this patch. Do you propose 3 new APIs just to get the max number?
>
> > > > > > + enum rte_crypto_asym_op_type types[] = {
> >
> > >
> > > The problem is in DPDK API. We must not suggest a size for enums.
> >
> > [Anoob] So agreed that LIST_END should be removed?
>
> Yes
>
> > > If we really need a size, then it must be explicit and updated in
> > > the lib binary (through a function) when the size increases.
> >
> > [Anoob] Precisely my thoughts. The loop with LIST_END done in application is
> not correct.
> > >
> > >
> > >
> > > > > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > > > > + enum rte_crypto_asym_op_type types[] = {
> > > > > > + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> > > > > > + RTE_CRYPTO_ASYM_OP_DECRYPT,
> > > > > > + RTE_CRYPTO_ASYM_OP_SIGN,
> > > > > > + RTE_CRYPTO_ASYM_OP_VERIFY,
> > > > > > + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > > > > > + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> > > > > > +
> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > > > > > + };
> > > > > > + for (i = 0; i <= RTE_DIM(types); i++) {
> > > > > > if (tc.modex.xform_type ==
> > > RTE_CRYPTO_ASYM_XFORM_RSA) {
> > > > > > - if (tc.rsa_data.op_type_flags & (1 << i)) {
> > > > > > + if (tc.rsa_data.op_type_flags
> > > > > > + & (1 <<
> > > > > > + types[i])) {
> > > > > > if (tc.rsa_data.key_exp) {
> > > > > > status = test_cryptodev_asym_op(
> > > > > > &testsuite_params, &tc,
> > > > > > - test_msg, sessionless, i,
> > > > > > +
> > > > > > + test_msg, sessionless, types[i],
> > > > > > RTE_RSA_KEY_TYPE_EXP);
> > > > > > }
> > > > > > if (status)
> > > > > > break;
> > > > > > - if (tc.rsa_data.key_qt && (i ==
> > > > > > + if (tc.rsa_data.key_qt
> > > > > > + && (types[i] ==
> > > > > > RTE_CRYPTO_ASYM_OP_DECRYPT ||
> > > > > > - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > > > +
> > > > > > + types[i] ==
> > > > > > + RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > > > status = test_cryptodev_asym_op(
> > > > > > &testsuite_params,
> > > > > > - &tc, test_msg, sessionless, i,
> > > > > > + &tc,
> > > > > > + test_msg, sessionless, types[i],
> > > > > > RTE_RSA_KET_TYPE_QT);
> > > > > > }
> > > > > > if (status)
> > > > > >
> > > > > > This way, application would only use the ones which it is
> > > > > > designed to work
> > > > > with. For QAT driver changes, we could have an overload if
> > > > > condition (if alg == x
> > > > > || alg = y || ...) to get the same effect.
>
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 13:38 0% ` Anoob Joseph
@ 2021-10-12 13:54 0% ` Thomas Monjalon
2021-10-12 14:18 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 13:54 UTC (permalink / raw)
To: Kinsella, Ray, Akhil Goyal, dev, Anoob Joseph
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh, bruce.richardson
12/10/2021 15:38, Anoob Joseph:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 12/10/2021 13:34, Anoob Joseph:
> > > From: Kinsella, Ray <mdr@ashroe.eu>
> > > > On 12/10/2021 11:50, Anoob Joseph wrote:
> > > > > From: Akhil Goyal <gakhil@marvell.com>
> > > > >>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > > > >>>> Remove *_LIST_END enumerators from asymmetric crypto lib to
> > > > >>>> avoid ABI breakage for every new addition in enums.
> > > > >>>>
> > > > >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > > >>>> ---
> > > > >>>> - } else if (xform->xform_type >=
> > > > >>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > > >>>> + } else if (xform->xform_type >
> > RTE_CRYPTO_ASYM_XFORM_ECPM
> > [...]
> > > > >>>
> > > > >>> So I am not sure that this is an improvement.
> >
> > Indeed, it is not an improvement.
> >
> > > > >>> The cryptodev issue we had, was that _LIST_END was being used to
> > > > >>> size arrays.
> > > > >>> And that broke when new algorithms got added. Is that an issue,
> > > > >>> in this
> > > > case?
> > > > >>
> > > > >> Yes we did this same exercise for symmetric crypto enums earlier.
> > > > >> Asym enums were left as it was experimental at that point.
> > > > >> They are still experimental, but thought of making this uniform
> > > > >> throughout DPDK enums.
> > > > >>
> > > > >>>
> > > > >>> I am not sure that swapping out _LIST_END, and then littering
> > > > >>> the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > > > >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> > improvement
> > > > >> here.
> > > > >>>
> > > > >>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> > > > >>> better or worse, than
> > > > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > > > >>>
> > > > >>> Interested to hear other thoughts.
> > > > >>
> > > > >> I don’t have any better solution for avoiding ABI issues for now.
> > > > >> The change is for avoiding ABI breakage. But we can drop this
> > > > >> patch For now as asym is still experimental.
> > > > >
> > > > > [Anoob] Having LIST_END would preclude new additions to asymmetric
> > algos?
> > > > If yes, then I would suggest we address it now.
> > > >
> > > > Not at all - but it can be problematic, if two versions of DPDK
> > > > disagree with the value of LIST_END.
> > > >
> > > > > Looking at the "problematic changes", we only have 2-3 application
> > > > > & PMD changes. For unit test application, we could may be do
> > > > > something like,
> > > >
> > > > The essental functionality not that different, I am just not sure
> > > > that the verbosity below is helping.
> > > > What you are really trying to guard against is people using LIST_END
> > > > to size arrays.
> > >
> > > [Anoob] Our problem is application using LIST_END (which comes from library)
> > to determine the number of iterations for the loop. My suggestion is to modify
> > the UT such that, we could use RTE_DIM(types) (which comes from application)
> > to determine iterations of loop. This would solve the problem, right?
> >
> > The problem is not the application.
> > Are you asking the app to define DPDK types?
>
> [Anoob] I didn't understand how you concluded that.
Because you define a specific array in the test app.
> The app is supposed to test "n" asymmetric features supported by DPDK. Currently, it does that by looping from 0 to LIST_END which happens to give you the first n features. Now, if we add any new asymmetric feature, LIST_END value would change. Isn't that the very reason why we removed LIST_END from symmetric library and applications?
Yes
> Now coming to what I proposed, the app is supposed to test "n" asymmetric features. LIST_END helps in doing the loops. If we remove LIST_END, then application will not be in a position to do a loop. My suggestion is, we list the types that are supposed to be tested by the app, and let that array be used as feature list.
>
> PS: Just to reiterate, my proposal is just a local array which would hold DPDK defined RTE enum values for the features that would be tested by this app/function.
I am more concerned by the general case than the test app.
I think a function returning a number is more app-friendly.
> > > > > + enum rte_crypto_asym_op_type types[] = {
>
> >
> > The problem is in DPDK API. We must not suggest a size for enums.
>
> [Anoob] So agreed that LIST_END should be removed?
Yes
> > If we really need a size, then it must be explicit and updated in the lib binary
> > (through a function) when the size increases.
>
> [Anoob] Precisely my thoughts. The loop with LIST_END done in application is not correct.
> >
> >
> >
> > > > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > > > + enum rte_crypto_asym_op_type types[] = {
> > > > > + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> > > > > + RTE_CRYPTO_ASYM_OP_DECRYPT,
> > > > > + RTE_CRYPTO_ASYM_OP_SIGN,
> > > > > + RTE_CRYPTO_ASYM_OP_VERIFY,
> > > > > + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > > > > + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> > > > > + RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > > > > + };
> > > > > + for (i = 0; i <= RTE_DIM(types); i++) {
> > > > > if (tc.modex.xform_type ==
> > RTE_CRYPTO_ASYM_XFORM_RSA) {
> > > > > - if (tc.rsa_data.op_type_flags & (1 << i)) {
> > > > > + if (tc.rsa_data.op_type_flags & (1
> > > > > + <<
> > > > > + types[i])) {
> > > > > if (tc.rsa_data.key_exp) {
> > > > > status = test_cryptodev_asym_op(
> > > > > &testsuite_params, &tc,
> > > > > - test_msg, sessionless, i,
> > > > > + test_msg,
> > > > > + sessionless, types[i],
> > > > > RTE_RSA_KEY_TYPE_EXP);
> > > > > }
> > > > > if (status)
> > > > > break;
> > > > > - if (tc.rsa_data.key_qt && (i ==
> > > > > + if (tc.rsa_data.key_qt &&
> > > > > + (types[i] ==
> > > > > RTE_CRYPTO_ASYM_OP_DECRYPT ||
> > > > > - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > > + types[i]
> > > > > + ==
> > > > > + RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > > status = test_cryptodev_asym_op(
> > > > > &testsuite_params,
> > > > > - &tc, test_msg, sessionless, i,
> > > > > + &tc,
> > > > > + test_msg, sessionless, types[i],
> > > > > RTE_RSA_KET_TYPE_QT);
> > > > > }
> > > > > if (status)
> > > > >
> > > > > This way, application would only use the ones which it is designed
> > > > > to work
> > > > with. For QAT driver changes, we could have an overload if condition
> > > > (if alg == x
> > > > || alg = y || ...) to get the same effect.
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 11:52 0% ` Thomas Monjalon
@ 2021-10-12 13:38 0% ` Anoob Joseph
2021-10-12 13:54 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-12 13:38 UTC (permalink / raw)
To: Thomas Monjalon, Kinsella, Ray, Akhil Goyal, dev
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh
Hi Thomas,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 12, 2021 5:22 PM
> To: Kinsella, Ray <mdr@ashroe.eu>; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org; Anoob Joseph <anoobj@marvell.com>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com;
> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
> enumerators
>
> 12/10/2021 13:34, Anoob Joseph:
> > From: Kinsella, Ray <mdr@ashroe.eu>
> > > On 12/10/2021 11:50, Anoob Joseph wrote:
> > > > From: Akhil Goyal <gakhil@marvell.com>
> > > >>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > > >>>> Remove *_LIST_END enumerators from asymmetric crypto lib to
> > > >>>> avoid ABI breakage for every new addition in enums.
> > > >>>>
> > > >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > >>>> ---
> > > >>>> - } else if (xform->xform_type >=
> > > >>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > >>>> + } else if (xform->xform_type >
> RTE_CRYPTO_ASYM_XFORM_ECPM
> [...]
> > > >>>
> > > >>> So I am not sure that this is an improvement.
>
> Indeed, it is not an improvement.
>
> > > >>> The cryptodev issue we had, was that _LIST_END was being used to
> > > >>> size arrays.
> > > >>> And that broke when new algorithms got added. Is that an issue,
> > > >>> in this
> > > case?
> > > >>
> > > >> Yes we did this same exercise for symmetric crypto enums earlier.
> > > >> Asym enums were left as it was experimental at that point.
> > > >> They are still experimental, but thought of making this uniform
> > > >> throughout DPDK enums.
> > > >>
> > > >>>
> > > >>> I am not sure that swapping out _LIST_END, and then littering
> > > >>> the code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > > >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an
> improvement
> > > >> here.
> > > >>>
> > > >>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> > > >>> better or worse, than
> > > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > > >>>
> > > >>> Interested to hear other thoughts.
> > > >>
> > > >> I don’t have any better solution for avoiding ABI issues for now.
> > > >> The change is for avoiding ABI breakage. But we can drop this
> > > >> patch For now as asym is still experimental.
> > > >
> > > > [Anoob] Having LIST_END would preclude new additions to asymmetric
> algos?
> > > If yes, then I would suggest we address it now.
> > >
> > > Not at all - but it can be problematic, if two versions of DPDK
> > > disagree with the value of LIST_END.
> > >
> > > > Looking at the "problematic changes", we only have 2-3 application
> > > > & PMD changes. For unit test application, we could may be do
> > > > something like,
> > >
> > > The essental functionality not that different, I am just not sure
> > > that the verbosity below is helping.
> > > What you are really trying to guard against is people using LIST_END
> > > to size arrays.
> >
> > [Anoob] Our problem is application using LIST_END (which comes from library)
> to determine the number of iterations for the loop. My suggestion is to modify
> the UT such that, we could use RTE_DIM(types) (which comes from application)
> to determine iterations of loop. This would solve the problem, right?
>
> The problem is not the application.
> Are you asking the app to define DPDK types?
[Anoob] I didn't understand how you concluded that. The app is supposed to test "n" asymmetric features supported by DPDK. Currently, it does that by looping from 0 to LIST_END, which happens to give you the first n features. Now, if we add any new asymmetric feature, the LIST_END value would change. Isn't that the very reason why we removed LIST_END from the symmetric library and applications?
Now coming to what I proposed: the app is supposed to test "n" asymmetric features, and LIST_END helps in doing the loops. If we remove LIST_END, the application will not be in a position to loop. My suggestion is that we list the types that are supposed to be tested by the app and let that array be used as the feature list.
PS: Just to reiterate, my proposal is just a local array which would hold DPDK-defined RTE enum values for the features that would be tested by this app/function.
> > > > + enum rte_crypto_asym_op_type types[] = {
>
> The problem is in DPDK API. We must not suggest a size for enums.
[Anoob] So agreed that LIST_END should be removed?
> If we really need a size, then it must be explicit and updated in the lib binary
> (through a function) when the size increases.
[Anoob] Precisely my thoughts. The loop with LIST_END done in application is not correct.
>
>
>
> > > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > > + enum rte_crypto_asym_op_type types[] = {
> > > > + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> > > > + RTE_CRYPTO_ASYM_OP_DECRYPT,
> > > > + RTE_CRYPTO_ASYM_OP_SIGN,
> > > > + RTE_CRYPTO_ASYM_OP_VERIFY,
> > > > + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > > > + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> > > > + RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > > > + };
> > > > + for (i = 0; i <= RTE_DIM(types); i++) {
> > > > if (tc.modex.xform_type ==
> RTE_CRYPTO_ASYM_XFORM_RSA) {
> > > > - if (tc.rsa_data.op_type_flags & (1 << i)) {
> > > > + if (tc.rsa_data.op_type_flags & (1
> > > > + <<
> > > > + types[i])) {
> > > > if (tc.rsa_data.key_exp) {
> > > > status = test_cryptodev_asym_op(
> > > > &testsuite_params, &tc,
> > > > - test_msg, sessionless, i,
> > > > + test_msg,
> > > > + sessionless, types[i],
> > > > RTE_RSA_KEY_TYPE_EXP);
> > > > }
> > > > if (status)
> > > > break;
> > > > - if (tc.rsa_data.key_qt && (i ==
> > > > + if (tc.rsa_data.key_qt &&
> > > > + (types[i] ==
> > > > RTE_CRYPTO_ASYM_OP_DECRYPT ||
> > > > - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > + types[i]
> > > > + ==
> > > > + RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > > status = test_cryptodev_asym_op(
> > > > &testsuite_params,
> > > > - &tc, test_msg, sessionless, i,
> > > > + &tc,
> > > > + test_msg, sessionless, types[i],
> > > > RTE_RSA_KET_TYPE_QT);
> > > > }
> > > > if (status)
> > > >
> > > > This way, application would only use the ones which it is designed
> > > > to work
> > > with. For QAT driver changes, we could have an overload if condition
> > > (if alg == x
> > > || alg = y || ...) to get the same effect.
>
>
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5] net: introduce IPv4 ihl and version fields
2021-10-04 12:13 4% ` [dpdk-dev] [PATCH v4] " Gregory Etelson
@ 2021-10-12 12:29 4% ` Gregory Etelson
1 sibling, 0 replies; 200+ results
From: Gregory Etelson @ 2021-10-12 12:29 UTC (permalink / raw)
To: dev, getelson
Cc: matan, rasland, olivier.matz, thomas, Bernard Iremonger, Ray Kinsella
RTE IPv4 header definition combines the `version' and `ihl' fields
into a single structure member.
This patch introduces dedicated structure members for both `version'
and `ihl' IPv4 fields. Separate header field definitions allow simplified
code to match on the IHL value in a flow rule.
The original `version_ihl' structure member is kept for backward
compatibility.
The patch implements one of the two announced changes to the
IPv4 header. The IPv4 header encodes fragment information in a
16-bit field: 3 bits hold flags and the remaining 13 bits hold the
fragment offset. A 13-bit bit-field cannot be defined consistently
for both big- and little-endian systems, which is why the second
announced change is not implemented here.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v2: Add dependency.
v3: Add comments.
v4: Update release notes.
v5: Remove deprecation notice.
Update the patch comment.
---
app/test/test_flow_classify.c | 8 ++++----
doc/guides/rel_notes/deprecation.rst | 6 ------
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/net/rte_ip.h | 16 +++++++++++++++-
4 files changed, 22 insertions(+), 11 deletions(-)
diff --git a/app/test/test_flow_classify.c b/app/test/test_flow_classify.c
index 951606f248..4f64be5357 100644
--- a/app/test/test_flow_classify.c
+++ b/app/test/test_flow_classify.c
@@ -95,7 +95,7 @@ static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
* dst mask 255.255.255.00 / udp src is 32 dst is 33 / end"
*/
static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
RTE_IPV4(2, 2, 2, 3), RTE_IPV4(2, 2, 2, 7)}
};
static const struct rte_flow_item_ipv4 ipv4_mask_24 = {
@@ -131,7 +131,7 @@ static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END,
* dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end"
*/
static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
RTE_IPV4(1, 2, 3, 4), RTE_IPV4(5, 6, 7, 8)}
};
@@ -150,8 +150,8 @@ static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP,
* dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end"
*/
static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, RTE_IPV4(11, 12, 13, 14),
- RTE_IPV4(15, 16, 17, 18)}
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0,
+ RTE_IPV4(11, 12, 13, 14), RTE_IPV4(15, 16, 17, 18)}
};
static struct rte_flow_item_sctp sctp_spec_1 = {
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..040f4a8868 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -167,12 +167,6 @@ Deprecation Notices
* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
-* net: The structure ``rte_ipv4_hdr`` will have two unions.
- The first union is for existing ``version_ihl`` byte
- and new bitfield for version and IHL.
- The second union is for existing ``fragment_offset``
- and new bitfield for fragment flags and offset.
-
* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
driver interface will be marked as internal in DPDK v21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..4fb4a1dac4 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* net: Add ``version`` and ``ihl`` bit-fields to ``struct rte_ipv4_hdr``.
+ Existing ``version_ihl`` field was kept for backward compatibility.
+
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 05948b69b7..89a68d9433 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -38,7 +38,21 @@ extern "C" {
* IPv4 Header
*/
struct rte_ipv4_hdr {
- uint8_t version_ihl; /**< version and header length */
+ __extension__
+ union {
+ uint8_t version_ihl; /**< version and header length */
+ struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+ uint8_t ihl:4; /**< header length */
+ uint8_t version:4; /**< version */
+#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ uint8_t version:4; /**< version */
+ uint8_t ihl:4; /**< header length */
+#else
+#error "setup endian definition"
+#endif
+ };
+ };
uint8_t type_of_service; /**< type of service */
rte_be16_t total_length; /**< length of packet */
rte_be16_t packet_id; /**< packet ID */
--
2.33.0
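For illustration only (not part of the patch), matching on the IHL with the new bit-fields could look roughly like this; the field values are arbitrary:

	/* Match IPv4 packets with IHL == 5 (no options) using the new field. */
	struct rte_flow_item_ipv4 ipv4_spec = { .hdr = { .ihl = 5 } };
	struct rte_flow_item_ipv4 ipv4_mask = { .hdr = { .ihl = 0xf } };
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &ipv4_spec,
		.mask = &ipv4_mask,
	};

Previously the same match required masking version_ihl by hand, e.g. (hdr->version_ihl & 0x0f) == 5.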
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 11:34 0% ` Anoob Joseph
@ 2021-10-12 11:52 0% ` Thomas Monjalon
2021-10-12 13:38 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 11:52 UTC (permalink / raw)
To: Kinsella, Ray, Akhil Goyal, dev, Anoob Joseph
Cc: david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh
12/10/2021 13:34, Anoob Joseph:
> From: Kinsella, Ray <mdr@ashroe.eu>
> > On 12/10/2021 11:50, Anoob Joseph wrote:
> > > From: Akhil Goyal <gakhil@marvell.com>
> > >>> On 08/10/2021 21:45, Akhil Goyal wrote:
> > >>>> Remove *_LIST_END enumerators from asymmetric crypto lib to avoid
> > >>>> ABI breakage for every new addition in enums.
> > >>>>
> > >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > >>>> ---
> > >>>> - } else if (xform->xform_type >=
> > >>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > >>>> + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
[...]
> > >>>
> > >>> So I am not sure that this is an improvement.
Indeed, it is not an improvement.
> > >>> The cryptodev issue we had, was that _LIST_END was being used to
> > >>> size arrays.
> > >>> And that broke when new algorithms got added. Is that an issue, in this
> > case?
> > >>
> > >> Yes we did this same exercise for symmetric crypto enums earlier.
> > >> Asym enums were left as it was experimental at that point.
> > >> They are still experimental, but thought of making this uniform
> > >> throughout DPDK enums.
> > >>
> > >>>
> > >>> I am not sure that swapping out _LIST_END, and then littering the
> > >>> code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement
> > >> here.
> > >>>
> > >>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> > >>> better or worse, than
> > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> > >>>
> > >>> Interested to hear other thoughts.
> > >>
> > >> I don’t have any better solution for avoiding ABI issues for now.
> > >> The change is for avoiding ABI breakage. But we can drop this patch
> > >> For now as asym is still experimental.
> > >
> > > [Anoob] Having LIST_END would preclude new additions to asymmetric algos?
> > If yes, then I would suggest we address it now.
> >
> > Not at all - but it can be problematic if two versions of DPDK disagree on the
> > value of LIST_END.
> >
> > > Looking at the "problematic changes", we only have 2-3 application &
> > > PMD changes. For the unit test application, we could maybe do something
> > > like this:
> >
> > The essential functionality is not that different; I am just not sure that the verbosity
> > below is helping.
> > What you are really trying to guard against is people using LIST_END to size
> > arrays.
>
> [Anoob] Our problem is the application using LIST_END (which comes from the library) to determine the number of iterations for the loop. My suggestion is to modify the UT such that we could use RTE_DIM(types) (which comes from the application) to determine the iterations of the loop. This would solve the problem, right?
The problem is not the application.
Are you asking the app to define DPDK types?
The problem is in DPDK API. We must not suggest a size for enums.
If we really need a size, then it must be explicit and updated in the lib binary
(through a function) when the size increases.
> > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > + enum rte_crypto_asym_op_type types[] = {
> > > + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> > > + RTE_CRYPTO_ASYM_OP_DECRYPT,
> > > + RTE_CRYPTO_ASYM_OP_SIGN,
> > > + RTE_CRYPTO_ASYM_OP_VERIFY,
> > > + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > > + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> > > + RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > > + };
> > > + for (i = 0; i < RTE_DIM(types); i++) {
> > > if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
> > > - if (tc.rsa_data.op_type_flags & (1 << i)) {
> > > + if (tc.rsa_data.op_type_flags & (1 << types[i])) {
> > > if (tc.rsa_data.key_exp) {
> > > status = test_cryptodev_asym_op(
> > > &testsuite_params, &tc,
> > > - test_msg, sessionless, i,
> > > + test_msg, sessionless, types[i],
> > > RTE_RSA_KEY_TYPE_EXP);
> > > }
> > > if (status)
> > > break;
> > > - if (tc.rsa_data.key_qt && (i ==
> > > + if (tc.rsa_data.key_qt && (types[i] ==
> > > RTE_CRYPTO_ASYM_OP_DECRYPT ||
> > > - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > + types[i] == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > > status = test_cryptodev_asym_op(
> > > &testsuite_params,
> > > - &tc, test_msg, sessionless, i,
> > > + &tc, test_msg, sessionless, types[i],
> > > RTE_RSA_KET_TYPE_QT);
> > > }
> > > if (status)
> > >
> > > This way, the application would only use the ones it is designed to work
> > > with. For the QAT driver changes, we could have an overloaded if condition
> > > (if alg == x || alg == y || ...) to get the same effect.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 11:28 0% ` Kinsella, Ray
@ 2021-10-12 11:34 0% ` Anoob Joseph
2021-10-12 11:52 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-12 11:34 UTC (permalink / raw)
To: Kinsella, Ray, Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh
Hi Ray,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Tuesday, October 12, 2021 4:58 PM
> To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; pablo.de.lara.guarch@intel.com;
> fiona.trahe@intel.com; declan.doherty@intel.com; matan@nvidia.com;
> g.singh@nxp.com; roy.fan.zhang@intel.com; jianjay.zhou@huawei.com;
> asomalap@amd.com; ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
> enumerators
>
>
>
> On 12/10/2021 11:50, Anoob Joseph wrote:
> > Hi Ray, Akhil,
> >
> > Please see inline.
> >
> > Thanks,
> > Anoob
> >
> >> -----Original Message-----
> >> From: Akhil Goyal <gakhil@marvell.com>
> >> Sent: Tuesday, October 12, 2021 3:49 PM
> >> To: Kinsella, Ray <mdr@ashroe.eu>; dev@dpdk.org
> >> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> >> hemant.agrawal@nxp.com; Anoob Joseph <anoobj@marvell.com>;
> >> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> >> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> >> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> >> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> >> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj
> >> Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> >> <adwivedi@marvell.com>; ciara.power@intel.com; Stephen Hemminger
> >> <stephen@networkplumber.org>; Yigit, Ferruh <ferruh.yigit@intel.com>
> >> Subject: RE: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove
> >> LIST_END enumerators
> >>
> >>>
> >>> On 08/10/2021 21:45, Akhil Goyal wrote:
> >>>> Remove *_LIST_END enumerators from asymmetric crypto lib to avoid
> >>>> ABI breakage for every new addition in enums.
> >>>>
> >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> >>>> ---
> >>>> v2: no change
> >>>>
> >>>> app/test/test_cryptodev_asym.c | 4 ++--
> >>>> drivers/crypto/qat/qat_asym.c | 2 +-
> >>>> lib/cryptodev/rte_crypto_asym.h | 4 ----
> >>>> 3 files changed, 3 insertions(+), 7 deletions(-)
> >>>>
> >>>> diff --git a/app/test/test_cryptodev_asym.c
> >>> b/app/test/test_cryptodev_asym.c
> >>>> index 9d19a6d6d9..603b2e4609 100644
> >>>> --- a/app/test/test_cryptodev_asym.c
> >>>> +++ b/app/test/test_cryptodev_asym.c
> >>>> @@ -541,7 +541,7 @@ test_one_case(const void *test_case, int
> >>> sessionless)
> >>>> printf(" %u) TestCase %s %s\n", test_index++,
> >>>> tc.modex.description, test_msg);
> >>>> } else {
> >>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> >>>> + for (i = 0; i <=
> >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
> >>>> if (tc.modex.xform_type ==
> >>> RTE_CRYPTO_ASYM_XFORM_RSA) {
> >>>> if (tc.rsa_data.op_type_flags & (1 << i)) {
> >>>> if (tc.rsa_data.key_exp) {
> >>>> @@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
> >>>> rte_crypto_asym_xform_strings[capa->xform_type]);
> >>>> printf("operation supported -");
> >>>>
> >>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> >>>> + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE;
> >>> i++) {
> >>>> /* check supported operations */
> >>>> if
> >>> (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
> >>>> printf(" %s",
> >>>> diff --git a/drivers/crypto/qat/qat_asym.c
> >>>> b/drivers/crypto/qat/qat_asym.c index 85973812a8..026625a4d2 100644
> >>>> --- a/drivers/crypto/qat/qat_asym.c
> >>>> +++ b/drivers/crypto/qat/qat_asym.c
> >>>> @@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev
> >>> *dev,
> >>>> err = -EINVAL;
> >>>> goto error;
> >>>> }
> >>>> - } else if (xform->xform_type >=
> >>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> >>>> + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
> >>>> || xform->xform_type <=
> >>> RTE_CRYPTO_ASYM_XFORM_NONE) {
> >>>> QAT_LOG(ERR, "Invalid asymmetric crypto xform");
> >>>> err = -EINVAL;
> >>>> diff --git a/lib/cryptodev/rte_crypto_asym.h
> >>> b/lib/cryptodev/rte_crypto_asym.h
> >>>> index 9c866f553f..5edf658572 100644
> >>>> --- a/lib/cryptodev/rte_crypto_asym.h
> >>>> +++ b/lib/cryptodev/rte_crypto_asym.h
> >>>> @@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
> >>>> */
> >>>> RTE_CRYPTO_ASYM_XFORM_ECPM,
> >>>> /**< Elliptic Curve Point Multiplication */
> >>>> - RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> >>>> - /**< End of list */
> >>>> };
> >>>>
> >>>> /**
> >>>> @@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
> >>>> /**< DH Public Key generation operation */
> >>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> >>>> /**< DH Shared Secret compute operation */
> >>>> - RTE_CRYPTO_ASYM_OP_LIST_END
> >>>> };
> >>>>
> >>>> /**
> >>>> @@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
> >>>> /**< RSA PKCS#1 OAEP padding scheme */
> >>>> RTE_CRYPTO_RSA_PADDING_PSS,
> >>>> /**< RSA PKCS#1 PSS padding scheme */
> >>>> - RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> >>>> };
> >>>>
> >>>> /**
> >>>
> >>> So I am not sure that this is an improvement.
> >>> The cryptodev issue we had, was that _LIST_END was being used to
> >>> size arrays.
> >>> And that broke when new algorithms got added. Is that an issue, in this
> case?
> >>
> >> Yes we did this same exercise for symmetric crypto enums earlier.
> >> Asym enums were left as it was experimental at that point.
> >> They are still experimental, but thought of making this uniform
> >> throughout DPDK enums.
> >>
> >>>
> >>> I am not sure that swapping out _LIST_END, and then littering the
> >>> code with RTE_CRYPTO_ASYM_XFORM_ECPM and
> >>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement
> >> here.
> >>>
> >>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> >>> better or worse, than
> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> >>>
> >>> Interested to hear other thoughts.
> >>
> >> I don’t have any better solution for avoiding ABI issues for now.
> >> The change is for avoiding ABI breakage. But we can drop this patch
> >> For now as asym is still experimental.
> >
> > [Anoob] Having LIST_END would preclude new additions to asymmetric algos?
> If yes, then I would suggest we address it now.
>
> Not at all - but it can be problematic if two versions of DPDK disagree on the
> value of LIST_END.
>
> > Looking at the "problematic changes", we only have 2-3 application &
> > PMD changes. For the unit test application, we could maybe do something
> > like this:
>
> The essential functionality is not that different; I am just not sure that the verbosity
> below is helping.
> What you are really trying to guard against is people using LIST_END to size
> arrays.
[Anoob] Our problem is the application using LIST_END (which comes from the library) to determine the number of iterations for the loop. My suggestion is to modify the UT such that we could use RTE_DIM(types) (which comes from the application) to determine the iterations of the loop. This would solve the problem, right?
>
> >
> > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > + enum rte_crypto_asym_op_type types[] = {
> > + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> > + RTE_CRYPTO_ASYM_OP_DECRYPT,
> > + RTE_CRYPTO_ASYM_OP_SIGN,
> > + RTE_CRYPTO_ASYM_OP_VERIFY,
> > + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> > + RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > + };
> > + for (i = 0; i < RTE_DIM(types); i++) {
> > if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
> > - if (tc.rsa_data.op_type_flags & (1 << i)) {
> > + if (tc.rsa_data.op_type_flags & (1 << types[i])) {
> > if (tc.rsa_data.key_exp) {
> > status = test_cryptodev_asym_op(
> > &testsuite_params, &tc,
> > - test_msg, sessionless, i,
> > + test_msg, sessionless, types[i],
> > RTE_RSA_KEY_TYPE_EXP);
> > }
> > if (status)
> > break;
> > - if (tc.rsa_data.key_qt && (i ==
> > + if (tc.rsa_data.key_qt && (types[i] ==
> > RTE_CRYPTO_ASYM_OP_DECRYPT ||
> > - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > + types[i] == RTE_CRYPTO_ASYM_OP_SIGN)) {
> > status = test_cryptodev_asym_op(
> > &testsuite_params,
> > - &tc, test_msg, sessionless, i,
> > + &tc, test_msg, sessionless, types[i],
> > RTE_RSA_KET_TYPE_QT);
> > }
> >
> > This way, the application would only use the ones it is designed to work
> > with. For the QAT driver changes, we could have an overloaded if condition
> > (if alg == x || alg == y || ...) to get the same effect.
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 10:50 0% ` Anoob Joseph
@ 2021-10-12 11:28 0% ` Kinsella, Ray
2021-10-12 11:34 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-12 11:28 UTC (permalink / raw)
To: Anoob Joseph, Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh
On 12/10/2021 11:50, Anoob Joseph wrote:
> Hi Ray, Akhil,
>
> Please see inline.
>
> Thanks,
> Anoob
>
>> -----Original Message-----
>> From: Akhil Goyal <gakhil@marvell.com>
>> Sent: Tuesday, October 12, 2021 3:49 PM
>> To: Kinsella, Ray <mdr@ashroe.eu>; dev@dpdk.org
>> Cc: thomas@monjalon.net; david.marchand@redhat.com;
>> hemant.agrawal@nxp.com; Anoob Joseph <anoobj@marvell.com>;
>> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
>> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
>> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
>> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
>> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
>> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
>> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
>> Yigit, Ferruh <ferruh.yigit@intel.com>
>> Subject: RE: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
>> enumerators
>>
>>>
>>> On 08/10/2021 21:45, Akhil Goyal wrote:
>>>> Remove *_LIST_END enumerators from asymmetric crypto lib to avoid
>>>> ABI breakage for every new addition in enums.
>>>>
>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>> ---
>>>> v2: no change
>>>>
>>>> app/test/test_cryptodev_asym.c | 4 ++--
>>>> drivers/crypto/qat/qat_asym.c | 2 +-
>>>> lib/cryptodev/rte_crypto_asym.h | 4 ----
>>>> 3 files changed, 3 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/app/test/test_cryptodev_asym.c
>>> b/app/test/test_cryptodev_asym.c
>>>> index 9d19a6d6d9..603b2e4609 100644
>>>> --- a/app/test/test_cryptodev_asym.c
>>>> +++ b/app/test/test_cryptodev_asym.c
>>>> @@ -541,7 +541,7 @@ test_one_case(const void *test_case, int
>>> sessionless)
>>>> printf(" %u) TestCase %s %s\n", test_index++,
>>>> tc.modex.description, test_msg);
>>>> } else {
>>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
>>>> + for (i = 0; i <=
>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
>>>> if (tc.modex.xform_type ==
>>> RTE_CRYPTO_ASYM_XFORM_RSA) {
>>>> if (tc.rsa_data.op_type_flags & (1 << i)) {
>>>> if (tc.rsa_data.key_exp) {
>>>> @@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
>>>> rte_crypto_asym_xform_strings[capa->xform_type]);
>>>> printf("operation supported -");
>>>>
>>>> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
>>>> + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE;
>>> i++) {
>>>> /* check supported operations */
>>>> if
>>> (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
>>>> printf(" %s",
>>>> diff --git a/drivers/crypto/qat/qat_asym.c
>>>> b/drivers/crypto/qat/qat_asym.c index 85973812a8..026625a4d2 100644
>>>> --- a/drivers/crypto/qat/qat_asym.c
>>>> +++ b/drivers/crypto/qat/qat_asym.c
>>>> @@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev
>>> *dev,
>>>> err = -EINVAL;
>>>> goto error;
>>>> }
>>>> - } else if (xform->xform_type >=
>>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>>>> + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
>>>> || xform->xform_type <=
>>> RTE_CRYPTO_ASYM_XFORM_NONE) {
>>>> QAT_LOG(ERR, "Invalid asymmetric crypto xform");
>>>> err = -EINVAL;
>>>> diff --git a/lib/cryptodev/rte_crypto_asym.h
>>> b/lib/cryptodev/rte_crypto_asym.h
>>>> index 9c866f553f..5edf658572 100644
>>>> --- a/lib/cryptodev/rte_crypto_asym.h
>>>> +++ b/lib/cryptodev/rte_crypto_asym.h
>>>> @@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
>>>> */
>>>> RTE_CRYPTO_ASYM_XFORM_ECPM,
>>>> /**< Elliptic Curve Point Multiplication */
>>>> - RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>>>> - /**< End of list */
>>>> };
>>>>
>>>> /**
>>>> @@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
>>>> /**< DH Public Key generation operation */
>>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
>>>> /**< DH Shared Secret compute operation */
>>>> - RTE_CRYPTO_ASYM_OP_LIST_END
>>>> };
>>>>
>>>> /**
>>>> @@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
>>>> /**< RSA PKCS#1 OAEP padding scheme */
>>>> RTE_CRYPTO_RSA_PADDING_PSS,
>>>> /**< RSA PKCS#1 PSS padding scheme */
>>>> - RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
>>>> };
>>>>
>>>> /**
>>>
>>> So I am not sure that this is an improvement.
>>> The cryptodev issue we had, was that _LIST_END was being used to size
>>> arrays.
>>> And that broke when new algorithms got added. Is that an issue, in this case?
>>
>> Yes we did this same exercise for symmetric crypto enums earlier.
>> Asym enums were left as it was experimental at that point.
>> They are still experimental, but thought of making this uniform throughout DPDK
>> enums.
>>
>>>
>>> I am not sure that swapping out _LIST_END, and then littering the code
>>> with RTE_CRYPTO_ASYM_XFORM_ECPM and
>>> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement
>> here.
>>>
>>> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
>>> better or worse, than RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
>>>
>>> Interested to hear other thoughts.
>>
>> I don’t have any better solution for avoiding ABI issues for now.
>> The change is for avoiding ABI breakage. But we can drop this patch For now as
>> asym is still experimental.
>
> [Anoob] Having LIST_END would preclude new additions to asymmetric algos? If yes, then I would suggest we address it now.
Not at all - but it can be problematic if two versions of DPDK disagree on the value of LIST_END.
> Looking at the "problematic changes", we only have 2-3 application & PMD changes. For the unit test application, we could maybe do something like this:
The essential functionality is not that different; I am just not sure that the verbosity below is helping.
What you are really trying to guard against is people using LIST_END to size arrays.
>
> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> + enum rte_crypto_asym_op_type types[] = {
> + RTE_CRYPTO_ASYM_OP_ENCRYPT,
> + RTE_CRYPTO_ASYM_OP_DECRYPT,
> + RTE_CRYPTO_ASYM_OP_SIGN,
> + RTE_CRYPTO_ASYM_OP_VERIFY,
> + RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> + RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
> + RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> + };
> + for (i = 0; i < RTE_DIM(types); i++) {
> if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
> - if (tc.rsa_data.op_type_flags & (1 << i)) {
> + if (tc.rsa_data.op_type_flags & (1 << types[i])) {
> if (tc.rsa_data.key_exp) {
> status = test_cryptodev_asym_op(
> &testsuite_params, &tc,
> - test_msg, sessionless, i,
> + test_msg, sessionless, types[i],
> RTE_RSA_KEY_TYPE_EXP);
> }
> if (status)
> break;
> - if (tc.rsa_data.key_qt && (i ==
> + if (tc.rsa_data.key_qt && (types[i] ==
> RTE_CRYPTO_ASYM_OP_DECRYPT ||
> - i == RTE_CRYPTO_ASYM_OP_SIGN)) {
> + types[i] == RTE_CRYPTO_ASYM_OP_SIGN)) {
> status = test_cryptodev_asym_op(
> &testsuite_params,
> - &tc, test_msg, sessionless, i,
> + &tc, test_msg, sessionless, types[i],
> RTE_RSA_KET_TYPE_QT);
> }
> if (status)
>
> This way, the application would only use the ones it is designed to work with. For the QAT driver changes, we could have an overloaded if condition (if alg == x || alg == y || ...) to get the same effect.
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 10:19 4% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-12 10:50 0% ` Anoob Joseph
2021-10-12 11:28 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-10-12 10:50 UTC (permalink / raw)
To: Akhil Goyal, Kinsella, Ray, dev
Cc: thomas, david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, g.singh, roy.fan.zhang,
jianjay.zhou, asomalap, ruifeng.wang, konstantin.ananyev,
radu.nicolau, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
ciara.power, Stephen Hemminger, Yigit, Ferruh
Hi Ray, Akhil,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Tuesday, October 12, 2021 3:49 PM
> To: Kinsella, Ray <mdr@ashroe.eu>; dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; Anoob Joseph <anoobj@marvell.com>;
> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com;
> declan.doherty@intel.com; matan@nvidia.com; g.singh@nxp.com;
> roy.fan.zhang@intel.com; jianjay.zhou@huawei.com; asomalap@amd.com;
> ruifeng.wang@arm.com; konstantin.ananyev@intel.com;
> radu.nicolau@intel.com; ajit.khaparde@broadcom.com; Nagadheeraj Rottela
> <rnagadheeraj@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>;
> ciara.power@intel.com; Stephen Hemminger <stephen@networkplumber.org>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Subject: RE: [EXT] Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END
> enumerators
>
> >
> > On 08/10/2021 21:45, Akhil Goyal wrote:
> > > Remove *_LIST_END enumerators from asymmetric crypto lib to avoid
> > > ABI breakage for every new addition in enums.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> > > v2: no change
> > >
> > > app/test/test_cryptodev_asym.c | 4 ++--
> > > drivers/crypto/qat/qat_asym.c | 2 +-
> > > lib/cryptodev/rte_crypto_asym.h | 4 ----
> > > 3 files changed, 3 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/app/test/test_cryptodev_asym.c
> > b/app/test/test_cryptodev_asym.c
> > > index 9d19a6d6d9..603b2e4609 100644
> > > --- a/app/test/test_cryptodev_asym.c
> > > +++ b/app/test/test_cryptodev_asym.c
> > > @@ -541,7 +541,7 @@ test_one_case(const void *test_case, int
> > sessionless)
> > > printf(" %u) TestCase %s %s\n", test_index++,
> > > tc.modex.description, test_msg);
> > > } else {
> > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > + for (i = 0; i <=
> > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
> > > if (tc.modex.xform_type ==
> > RTE_CRYPTO_ASYM_XFORM_RSA) {
> > > if (tc.rsa_data.op_type_flags & (1 << i)) {
> > > if (tc.rsa_data.key_exp) {
> > > @@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
> > > rte_crypto_asym_xform_strings[capa->xform_type]);
> > > printf("operation supported -");
> > >
> > > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > > + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE;
> > i++) {
> > > /* check supported operations */
> > > if
> > (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
> > > printf(" %s",
> > > diff --git a/drivers/crypto/qat/qat_asym.c
> > > b/drivers/crypto/qat/qat_asym.c index 85973812a8..026625a4d2 100644
> > > --- a/drivers/crypto/qat/qat_asym.c
> > > +++ b/drivers/crypto/qat/qat_asym.c
> > > @@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev
> > *dev,
> > > err = -EINVAL;
> > > goto error;
> > > }
> > > - } else if (xform->xform_type >=
> > RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
> > > || xform->xform_type <=
> > RTE_CRYPTO_ASYM_XFORM_NONE) {
> > > QAT_LOG(ERR, "Invalid asymmetric crypto xform");
> > > err = -EINVAL;
> > > diff --git a/lib/cryptodev/rte_crypto_asym.h
> > b/lib/cryptodev/rte_crypto_asym.h
> > > index 9c866f553f..5edf658572 100644
> > > --- a/lib/cryptodev/rte_crypto_asym.h
> > > +++ b/lib/cryptodev/rte_crypto_asym.h
> > > @@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
> > > */
> > > RTE_CRYPTO_ASYM_XFORM_ECPM,
> > > /**< Elliptic Curve Point Multiplication */
> > > - RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > > - /**< End of list */
> > > };
> > >
> > > /**
> > > @@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
> > > /**< DH Public Key generation operation */
> > > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > > /**< DH Shared Secret compute operation */
> > > - RTE_CRYPTO_ASYM_OP_LIST_END
> > > };
> > >
> > > /**
> > > @@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
> > > /**< RSA PKCS#1 OAEP padding scheme */
> > > RTE_CRYPTO_RSA_PADDING_PSS,
> > > /**< RSA PKCS#1 PSS padding scheme */
> > > - RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> > > };
> > >
> > > /**
> >
> > So I am not sure that this is an improvement.
> > The cryptodev issue we had, was that _LIST_END was being used to size
> > arrays.
> > And that broke when new algorithms got added. Is that an issue, in this case?
>
> Yes we did this same exercise for symmetric crypto enums earlier.
> Asym enums were left as it was experimental at that point.
> They are still experimental, but thought of making this uniform throughout DPDK
> enums.
>
> >
> > I am not sure that swapping out _LIST_END, and then littering the code
> > with RTE_CRYPTO_ASYM_XFORM_ECPM and
> > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement
> here.
> >
> > My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> > better or worse, than RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
> >
> > Interested to hear other thoughts.
>
> I don’t have any better solution for avoiding ABI issues for now.
> The change is for avoiding ABI breakage. But we can drop this patch For now as
> asym is still experimental.
[Anoob] Having LIST_END would preclude new additions to asymmetric algos? If yes, then I would suggest we address it now.
Looking at the "problematic changes", we only have 2-3 application & PMD changes. For the unit test application, we could maybe do something like this:
- for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+ enum rte_crypto_asym_op_type types[] = {
+ RTE_CRYPTO_ASYM_OP_ENCRYPT,
+ RTE_CRYPTO_ASYM_OP_DECRYPT,
+ RTE_CRYPTO_ASYM_OP_SIGN,
+ RTE_CRYPTO_ASYM_OP_VERIFY,
+ RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
+ RTE_CRYPTO_ASYM_OP_PUBLIC_KEY_GENERATE,
+ RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
+ };
+ for (i = 0; i < RTE_DIM(types); i++) {
if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
- if (tc.rsa_data.op_type_flags & (1 << i)) {
+ if (tc.rsa_data.op_type_flags & (1 << types[i])) {
if (tc.rsa_data.key_exp) {
status = test_cryptodev_asym_op(
&testsuite_params, &tc,
- test_msg, sessionless, i,
+ test_msg, sessionless, types[i],
RTE_RSA_KEY_TYPE_EXP);
}
if (status)
break;
- if (tc.rsa_data.key_qt && (i ==
+ if (tc.rsa_data.key_qt && (types[i] ==
RTE_CRYPTO_ASYM_OP_DECRYPT ||
- i == RTE_CRYPTO_ASYM_OP_SIGN)) {
+ types[i] == RTE_CRYPTO_ASYM_OP_SIGN)) {
status = test_cryptodev_asym_op(
&testsuite_params,
- &tc, test_msg, sessionless, i,
+ &tc, test_msg, sessionless, types[i],
RTE_RSA_KET_TYPE_QT);
}
if (status)
This way, the application would only use the ones it is designed to work with. For the QAT driver changes, we could have an overloaded if condition (if alg == x || alg == y || ...) to get the same effect.
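As a rough sketch of that idea (illustrative only; which xforms the QAT PMD actually supports is not implied here), the check the patch touches in qat_asym_session_configure() could enumerate the handled transforms explicitly:

	} else if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA &&
			xform->xform_type != RTE_CRYPTO_ASYM_XFORM_MODEX &&
			xform->xform_type != RTE_CRYPTO_ASYM_XFORM_MODINV) {
		/* Reject anything the driver is not written to handle,
		 * without referring to a LIST_END style sentinel. */
		QAT_LOG(ERR, "Invalid asymmetric crypto xform");
		err = -EINVAL;
		goto error;
	}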
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 1/5] ethdev: update modify field flow action
@ 2021-10-12 10:49 3% ` Viacheslav Ovsiienko
0 siblings, 0 replies; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-12 10:49 UTC (permalink / raw)
To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- immediate source can be presented either as an unsigned
64-bit integer or pointer to data pattern in memory.
There was no explicit pointer field defined in the union.
- the byte ordering for 64-bit integer was not specified.
Many fields have shorter lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was not defined and documented.
- 64-bit integer size is not enough to provide IPv6
addresses.
In order to cover the issues and exclude any ambiguities
the following is done:
- introduce the explicit pointer field
in rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with 16-byte array
- update the modify field flow action documentation
Appropriate deprecation notice has been removed.
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
Fixes: 2ba49b5f3721 ("doc: announce change to ethdev modify action data")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
doc/guides/prog_guide/rte_flow.rst | 24 +++++++++++++++++++++++-
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 7 +++++++
lib/ethdev/rte_flow.h | 16 ++++++++++++----
4 files changed, 42 insertions(+), 9 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..b08087511f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,22 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+``RTE_FLOW_FIELD_MAC_DST`` should follow the conventions of ``dst`` field
+in ``rte_flow_item_eth`` structure, with type ``RTE_FLOW_FIELD_IPV6_SRC`` -
+``rte_flow_item_ipv6`` conventions, and so on. If the field size is larger than
+16 bytes the pattern can be provided as pointer only.
+
+The bitfield extracted from the memory being applied as second operation
+parameter is defined by action width and by the destination field offset.
+Application should provide the data in immediate value memory (either as
+buffer or by pointer) exactly as item field without any applied explicit offset,
+and destination packet field (with specified width and bit offset) will be
+replaced by immediate source bits from the same bit offset. For example,
+to replace the third byte of MAC address with value 0x85, application should
+specify destination width as 8, destination offset as 16, and provide immediate
+value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
.. _table_rte_flow_action_modify_field:
@@ -2865,7 +2881,13 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+---------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
+---------------+----------------------------------------------------------+
- | ``value`` | immediate value or a pointer to this value |
+ | ``value`` | immediate value buffer (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_VALUE |
+ | | field type |
+ +---------------+----------------------------------------------------------+
+ | ``pvalue`` | pointer to immediate value data (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_POINTER |
+ | | field type |
+---------------+----------------------------------------------------------+
Action: ``CONNTRACK``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..dee14077a5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -120,10 +120,6 @@ Deprecation Notices
* ethdev: Announce moving from dedicated modify function for each field,
to using the general ``rte_flow_modify_field`` action.
-* ethdev: The struct ``rte_flow_action_modify_data`` will be modified
- to support modifying fields larger than 64 bits.
- In addition, documentation will be updated to clarify byte order.
-
* ethdev: Attribute ``shared`` of the ``struct rte_flow_action_count``
is deprecated and will be removed in DPDK 21.11. Shared counters should
be managed using shared actions API (``rte_flow_shared_action_create`` etc).
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..578c1206e7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,13 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+ array is extended, data pointer field is explicitly added to union, the
+ action behavior is defined in more strict fashion and documentation updated.
+ The immediate value behavior has been changed, the entire immediate field
+ should be provided, and offset for immediate source bitfield is assigned
+ from destination one.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..f14f77772b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3217,10 +3217,18 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in relevant rte_flow_item_xxx.
+ * The immediate source bitfield offset is inherited from
+ * the destination's one.
*/
- uint64_t value;
+ uint8_t value[16];
+ /**
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
@@ -3240,7 +3248,7 @@ enum rte_flow_modify_op {
* RTE_FLOW_ACTION_TYPE_MODIFY_FIELD
*
* Modify a destination header field according to the specified
- * operation. Another packet field can be used as a source as well
+ * operation. Another field of the packet can be used as a source as well
* as tag, mark, metadata, immediate value or a pointer to it.
*/
struct rte_flow_action_modify_field {
--
2.18.1
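As a usage sketch of the updated structure (illustrative only, mirroring the MAC example from the documentation above; no other values are implied):

	/* Set the third byte of the destination MAC to 0x85: width is 8 bits,
	 * destination offset is 16, and the immediate value carries the byte
	 * at the same offset as in the rte_flow_item_eth dst field. */
	struct rte_flow_action_modify_field conf = {
		.operation = RTE_FLOW_MODIFY_SET,
		.dst = {
			.field = RTE_FLOW_FIELD_MAC_DST,
			.offset = 16,
		},
		.src = {
			.field = RTE_FLOW_FIELD_VALUE,
			.value = { 0, 0, 0x85 },
		},
		.width = 8,
	};
	struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
		.conf = &conf,
	};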
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-12 10:05 3% ` Kinsella, Ray
@ 2021-10-12 10:29 0% ` Kundapura, Ganapati
0 siblings, 0 replies; 200+ results
From: Kundapura, Ganapati @ 2021-10-12 10:29 UTC (permalink / raw)
To: Kinsella, Ray, Jerin Jacob, Thomas Monjalon
Cc: David Marchand, dpdk-dev, Jayatheerthan, Jay
Hi,
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: 12 October 2021 15:35
> To: Jerin Jacob <jerinjacobk@gmail.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Kundapura, Ganapati <ganapati.kundapura@intel.com>; David Marchand
> <david.marchand@redhat.com>; dpdk-dev <dev@dpdk.org>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>
> Subject: Re: [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
>
>
>
> On 12/10/2021 10:26, Jerin Jacob wrote:
> > On Tue, Oct 12, 2021 at 2:40 PM Thomas Monjalon <thomas@monjalon.net>
> wrote:
> >>
> >> 12/10/2021 10:47, Jerin Jacob:
> >>> On Tue, Oct 12, 2021 at 2:05 PM Kundapura, Ganapati
> >>> <ganapati.kundapura@intel.com> wrote:
> >>>> From: Jerin Jacob <jerinjacobk@gmail.com>
> >>>>>> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> >>>>>> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> >>>>>> @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> >>>>>> /**< Eventdev enqueue count */
> >>>>>> uint64_t rx_enq_retry;
> >>>>>> /**< Eventdev enqueue retry count */
> >>>>>> + uint64_t rx_event_buf_count;
> >>>>>> + /**< Rx event buffered count */
> >>>>>> + uint64_t rx_event_buf_size;
> >>>>>
> >>>>>
> >>>>> Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
> >>>> Please confirm if moving the above two members to end of the structure
> overcomes ABI breakage?
> >>>
> >>>
> >>> + @Ray Kinsella @Thomas Monjalon @David Marchand
> >>>
> >>> It will still break the ABI. IMO, Since it is an ABI breaking
> >>> release it is OK. If there are no other objections, Please move the
> >>> variable to end of the structure and update release notes for ABI
> >>> changes.
> >>
> >> Why moving since it breaks ABI anyway?
> >
> > There is no specific gain in keeping new additions in the middle of the structure.
>
> 21.11 is an ABI breaking release, so move it where you like :-)
Posted a new patch with the new struct members moved to the end of the struct,
the release notes updated and the review comments addressed.
>
> >> I think you can keep as is.
> >>
> >>
> >>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-12 10:24 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-12 10:24 UTC (permalink / raw)
To: Nicolau, Radu, Ray Kinsella, Akhil Goyal, Doherty, Declan
Cc: dev, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> Add support for specifying UDP port params for UDP encapsulation option.
> RFC3948 section-2.1 does not enforce using specific the UDP ports for
> UDP-Encapsulated ESP Header
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 5 ++---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/security/rte_security.h | 7 +++++++
> 3 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 8b7b0beee2..d24d69b669 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -210,9 +210,8 @@ Deprecation Notices
> pointer for the private data to the application which can be attached
> to the packet while enqueuing.
>
> -* security: The structure ``rte_security_ipsec_xform`` will be extended with
> - multiple fields: source and destination port of UDP encapsulation,
> - IPsec payload MSS (Maximum Segment Size).
> +* security: The structure ``rte_security_ipsec_xform`` will be extended with:
> + new field: IPsec payload MSS (Maximum Segment Size).
>
> * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> will be updated with new fields to support new features like IPsec inner
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 8ac6632abf..1a29640eea 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -238,6 +238,11 @@ ABI Changes
> application to start from an arbitrary ESN value for debug and SA lifetime
> enforcement purposes.
>
> +* security: A new structure ``udp`` was added in structure
> + ``rte_security_ipsec_xform`` to allow setting the source and destination ports
> + for UDP encapsulated IPsec traffic.
> +
> +
> Known Issues
> ------------
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 371d64647a..b30425e206 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
> };
> };
>
> +struct rte_security_ipsec_udp_param {
> + uint16_t sport;
> + uint16_t dport;
> +};
> +
> /**
> * IPsec Security Association option flags
> */
> @@ -288,6 +293,8 @@ struct rte_security_ipsec_xform {
> };
> } esn;
> /**< Extended Sequence Number */
> + struct rte_security_ipsec_udp_param udp;
> + /**< UDP parameters, ignored when udp_encap option not specified */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
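For reference, a minimal sketch of how an application might fill the new fields (illustrative; the API does not fix the ports, 4500 is simply the usual NAT-T choice):

	struct rte_security_ipsec_xform ipsec_xform = {
		/* ... other SA parameters omitted for brevity ... */
		.options = { .udp_encap = 1 },
		.udp = { .sport = 4500, .dport = 4500 },
	};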
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-12 10:23 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-12 10:23 UTC (permalink / raw)
To: Nicolau, Radu, Ray Kinsella, Akhil Goyal, Doherty, Declan
Cc: dev, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
>
> Update ipsec_xform definition to include ESN field.
> This allows the application to control the ESN starting value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 2 +-
> doc/guides/rel_notes/release_21_11.rst | 4 ++++
> lib/security/rte_security.h | 8 ++++++++
> 3 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index baf15aa722..8b7b0beee2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -212,7 +212,7 @@ Deprecation Notices
>
> * security: The structure ``rte_security_ipsec_xform`` will be extended with
> multiple fields: source and destination port of UDP encapsulation,
> - IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
> + IPsec payload MSS (Maximum Segment Size).
>
> * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> will be updated with new fields to support new features like IPsec inner
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index c0a7f75518..401c6d453a 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -229,6 +229,10 @@ ABI Changes
> ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> and hard expiry limits. Limits can be either in number of packets or bytes.
>
> +* security: A new structure ``esn`` was added in structure
> + ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
> + application to start from an arbitrary ESN value for debug and SA lifetime
> + enforcement purposes.
>
> Known Issues
> ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 2013e65e49..371d64647a 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -280,6 +280,14 @@ struct rte_security_ipsec_xform {
> /**< Anti replay window size to enable sequence replay attack handling.
> * replay checking is disabled if the window size is 0.
> */
> + union {
> + uint64_t value;
> + struct {
> + uint32_t low;
> + uint32_t hi;
> + };
> + } esn;
> + /**< Extended Sequence Number */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-12 9:55 3% ` Kinsella, Ray
@ 2021-10-12 10:19 4% ` Akhil Goyal
2021-10-12 10:50 0% ` Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-12 10:19 UTC (permalink / raw)
To: Kinsella, Ray, dev
Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde,
Nagadheeraj Rottela, Ankur Dwivedi, ciara.power,
Stephen Hemminger, Yigit, Ferruh
>
> On 08/10/2021 21:45, Akhil Goyal wrote:
> > Remove *_LIST_END enumerators from asymmetric crypto
> > lib to avoid ABI breakage for every new addition in
> > enums.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > v2: no change
> >
> > app/test/test_cryptodev_asym.c | 4 ++--
> > drivers/crypto/qat/qat_asym.c | 2 +-
> > lib/cryptodev/rte_crypto_asym.h | 4 ----
> > 3 files changed, 3 insertions(+), 7 deletions(-)
> >
> > diff --git a/app/test/test_cryptodev_asym.c
> b/app/test/test_cryptodev_asym.c
> > index 9d19a6d6d9..603b2e4609 100644
> > --- a/app/test/test_cryptodev_asym.c
> > +++ b/app/test/test_cryptodev_asym.c
> > @@ -541,7 +541,7 @@ test_one_case(const void *test_case, int
> sessionless)
> > printf(" %u) TestCase %s %s\n", test_index++,
> > tc.modex.description, test_msg);
> > } else {
> > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > + for (i = 0; i <=
> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
> > if (tc.modex.xform_type ==
> RTE_CRYPTO_ASYM_XFORM_RSA) {
> > if (tc.rsa_data.op_type_flags & (1 << i)) {
> > if (tc.rsa_data.key_exp) {
> > @@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
> > rte_crypto_asym_xform_strings[capa->xform_type]);
> > printf("operation supported -");
> >
> > - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> > + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE;
> i++) {
> > /* check supported operations */
> > if
> (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
> > printf(" %s",
> > diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
> > index 85973812a8..026625a4d2 100644
> > --- a/drivers/crypto/qat/qat_asym.c
> > +++ b/drivers/crypto/qat/qat_asym.c
> > @@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev
> *dev,
> > err = -EINVAL;
> > goto error;
> > }
> > - } else if (xform->xform_type >=
> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
> > || xform->xform_type <=
> RTE_CRYPTO_ASYM_XFORM_NONE) {
> > QAT_LOG(ERR, "Invalid asymmetric crypto xform");
> > err = -EINVAL;
> > diff --git a/lib/cryptodev/rte_crypto_asym.h
> b/lib/cryptodev/rte_crypto_asym.h
> > index 9c866f553f..5edf658572 100644
> > --- a/lib/cryptodev/rte_crypto_asym.h
> > +++ b/lib/cryptodev/rte_crypto_asym.h
> > @@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
> > */
> > RTE_CRYPTO_ASYM_XFORM_ECPM,
> > /**< Elliptic Curve Point Multiplication */
> > - RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > - /**< End of list */
> > };
> >
> > /**
> > @@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
> > /**< DH Public Key generation operation */
> > RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> > /**< DH Shared Secret compute operation */
> > - RTE_CRYPTO_ASYM_OP_LIST_END
> > };
> >
> > /**
> > @@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
> > /**< RSA PKCS#1 OAEP padding scheme */
> > RTE_CRYPTO_RSA_PADDING_PSS,
> > /**< RSA PKCS#1 PSS padding scheme */
> > - RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> > };
> >
> > /**
>
> So I am not sure that this is an improvement.
> The cryptodev issue we had, was that _LIST_END was being used to size
> arrays.
> And that broke when new algorithms got added. Is that an issue, in this case?
Yes, we did this same exercise for the symmetric crypto enums earlier.
The asym enums were left alone as they were experimental at that point.
They are still experimental, but I thought of making this uniform
throughout the DPDK enums.
>
> I am not sure that swapping out _LIST_END, and then littering the code with
> RTE_CRYPTO_ASYM_XFORM_ECPM and
> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement
> here.
>
> My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not
> better or worse,
> than RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
>
> Interested to hear other thoughts.
I don’t have any better solution for avoiding ABI issues for now.
The change is for avoiding ABI breakage, but we can drop this patch
for now as asym is still experimental.
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-12 9:26 0% ` Jerin Jacob
@ 2021-10-12 10:05 3% ` Kinsella, Ray
2021-10-12 10:29 0% ` Kundapura, Ganapati
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-12 10:05 UTC (permalink / raw)
To: Jerin Jacob, Thomas Monjalon
Cc: Kundapura, Ganapati, David Marchand, dpdk-dev, Jayatheerthan, Jay
On 12/10/2021 10:26, Jerin Jacob wrote:
> On Tue, Oct 12, 2021 at 2:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>
>> 12/10/2021 10:47, Jerin Jacob:
>>> On Tue, Oct 12, 2021 at 2:05 PM Kundapura, Ganapati
>>> <ganapati.kundapura@intel.com> wrote:
>>>> From: Jerin Jacob <jerinjacobk@gmail.com>
>>>>>> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
>>>>>> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
>>>>>> @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
>>>>>> /**< Eventdev enqueue count */
>>>>>> uint64_t rx_enq_retry;
>>>>>> /**< Eventdev enqueue retry count */
>>>>>> + uint64_t rx_event_buf_count;
>>>>>> + /**< Rx event buffered count */
>>>>>> + uint64_t rx_event_buf_size;
>>>>>
>>>>>
>>>>> Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
>>>> Please confirm if moving the above two members to end of the structure overcomes ABI breakage?
>>>
>>>
>>> + @Ray Kinsella @Thomas Monjalon @David Marchand
>>>
>>> It will still break the ABI. IMO, Since it is an ABI breaking release
>>> it is OK. If there are no other objections, Please move the variable
>>> to end
>>> of the structure and update release notes for ABI changes.
>>
>> Why moving since it breaks ABI anyway?
>
> There is no specific gain in keeping new additions in the middle of structure.
21.11 is an ABI breaking release, so move it where you like :-)
>> I think you can keep as is.
>>
>>
>>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
2021-10-11 10:46 0% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Zhang, Roy Fan
@ 2021-10-12 9:55 3% ` Kinsella, Ray
2021-10-12 10:19 4% ` [dpdk-dev] [EXT] " Akhil Goyal
2 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-12 9:55 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Stephen Hemminger, Yigit, Ferruh
On 08/10/2021 21:45, Akhil Goyal wrote:
> Remove *_LIST_END enumerators from asymmetric crypto
> lib to avoid ABI breakage for every new addition in
> enums.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> v2: no change
>
> app/test/test_cryptodev_asym.c | 4 ++--
> drivers/crypto/qat/qat_asym.c | 2 +-
> lib/cryptodev/rte_crypto_asym.h | 4 ----
> 3 files changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
> index 9d19a6d6d9..603b2e4609 100644
> --- a/app/test/test_cryptodev_asym.c
> +++ b/app/test/test_cryptodev_asym.c
> @@ -541,7 +541,7 @@ test_one_case(const void *test_case, int sessionless)
> printf(" %u) TestCase %s %s\n", test_index++,
> tc.modex.description, test_msg);
> } else {
> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
> if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
> if (tc.rsa_data.op_type_flags & (1 << i)) {
> if (tc.rsa_data.key_exp) {
> @@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
> rte_crypto_asym_xform_strings[capa->xform_type]);
> printf("operation supported -");
>
> - for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
> + for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
> /* check supported operations */
> if (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
> printf(" %s",
> diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
> index 85973812a8..026625a4d2 100644
> --- a/drivers/crypto/qat/qat_asym.c
> +++ b/drivers/crypto/qat/qat_asym.c
> @@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev,
> err = -EINVAL;
> goto error;
> }
> - } else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> + } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
> || xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
> QAT_LOG(ERR, "Invalid asymmetric crypto xform");
> err = -EINVAL;
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index 9c866f553f..5edf658572 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
> */
> RTE_CRYPTO_ASYM_XFORM_ECPM,
> /**< Elliptic Curve Point Multiplication */
> - RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> - /**< End of list */
> };
>
> /**
> @@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
> /**< DH Public Key generation operation */
> RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
> /**< DH Shared Secret compute operation */
> - RTE_CRYPTO_ASYM_OP_LIST_END
> };
>
> /**
> @@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
> /**< RSA PKCS#1 OAEP padding scheme */
> RTE_CRYPTO_RSA_PADDING_PSS,
> /**< RSA PKCS#1 PSS padding scheme */
> - RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
> };
>
> /**
So I am not sure that this is an improvement.
The cryptodev issue we had, was that _LIST_END was being used to size arrays.
And that broke when new algorithms got added. Is that an issue, in this case?
I am not sure that swapping out _LIST_END, and then littering the code with
RTE_CRYPTO_ASYM_XFORM_ECPM and RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE, is an improvement here.
My 2c is that from an ABI PoV RTE_CRYPTO_ASYM_OP_LIST_END is not better or worse,
than RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE?
Interested to hear other thoughts.
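To illustrate the array-sizing problem referred to above, a minimal sketch of
hypothetical application code (not taken from any patch) that breaks once a new
enumerator is inserted before _LIST_END in a later release:

    /* Array size is baked in at application build time from the old
     * value of the _LIST_END enumerator. */
    static uint64_t op_count[RTE_CRYPTO_ASYM_OP_LIST_END];

    enum rte_crypto_asym_op_type op_type = get_completed_op_type(); /* hypothetical helper */
    op_count[op_type]++; /* out of bounds if op_type is a value added after the app was built */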
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-12 9:10 3% ` Thomas Monjalon
@ 2021-10-12 9:26 0% ` Jerin Jacob
2021-10-12 10:05 3% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-12 9:26 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Kundapura, Ganapati, Ray Kinsella, David Marchand, dpdk-dev,
Jayatheerthan, Jay
On Tue, Oct 12, 2021 at 2:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 12/10/2021 10:47, Jerin Jacob:
> > On Tue, Oct 12, 2021 at 2:05 PM Kundapura, Ganapati
> > <ganapati.kundapura@intel.com> wrote:
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > > @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> > > > > /**< Eventdev enqueue count */
> > > > > uint64_t rx_enq_retry;
> > > > > /**< Eventdev enqueue retry count */
> > > > > + uint64_t rx_event_buf_count;
> > > > > + /**< Rx event buffered count */
> > > > > + uint64_t rx_event_buf_size;
> > > >
> > > >
> > > > Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
> > > Please confirm if moving the above two members to end of the structure overcomes ABI breakage?
> >
> >
> > + @Ray Kinsella @Thomas Monjalon @David Marchand
> >
> > It will still break the ABI. IMO, Since it is an ABI breaking release
> > it is OK. If there are no other objections, Please move the variable
> > to end
> > of the structure and update release notes for ABI changes.
>
> Why moving since it breaks ABI anyway?
There is no specific gain in keeping new additions in the middle of structure.
> I think you can keep as is.
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-12 8:47 4% ` Jerin Jacob
@ 2021-10-12 9:10 3% ` Thomas Monjalon
2021-10-12 9:26 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 9:10 UTC (permalink / raw)
To: Kundapura, Ganapati, Ray Kinsella, David Marchand, Jerin Jacob
Cc: dpdk-dev, Jayatheerthan, Jay
12/10/2021 10:47, Jerin Jacob:
> On Tue, Oct 12, 2021 at 2:05 PM Kundapura, Ganapati
> <ganapati.kundapura@intel.com> wrote:
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> > > > /**< Eventdev enqueue count */
> > > > uint64_t rx_enq_retry;
> > > > /**< Eventdev enqueue retry count */
> > > > + uint64_t rx_event_buf_count;
> > > > + /**< Rx event buffered count */
> > > > + uint64_t rx_event_buf_size;
> > >
> > >
> > > Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
> > Please confirm if moving the above two members to end of the structure overcomes ABI breakage?
>
>
> + @Ray Kinsella @Thomas Monjalon @David Marchand
>
> It will still break the ABI. IMO, Since it is an ABI breaking release
> it is OK. If there are no other objections, Please move the variable
> to end
> of the structure and update release notes for ABI changes.
Why moving since it breaks ABI anyway?
I think you can keep as is.
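For context, a minimal sketch (hypothetical application code, not from the
patch) of why any addition to this structure breaks the ABI regardless of
where the new members are placed:

    /* The application allocates the stats structure itself, so its size
     * and member offsets are fixed in the application binary. */
    uint8_t id = 0; /* adapter ID, assumed to be set up elsewhere */
    struct rte_event_eth_rx_adapter_stats stats;

    rte_event_eth_rx_adapter_stats_get(id, &stats);
    /* A newer library writing two extra counters either overflows this
     * old-sized buffer (members appended at the end) or shifts existing
     * member offsets (members inserted in the middle). */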
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
2021-10-12 6:59 0% ` Thomas Monjalon
@ 2021-10-12 8:53 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-12 8:53 UTC (permalink / raw)
To: dev
On 12/10/2021 07:59, Thomas Monjalon wrote:
> 11/10/2021 18:58, Akhil Goyal:
>>> 08/10/2021 22:45, Akhil Goyal:
>>>> In struct rte_security_ipsec_sa_options, for every new option
>>>> added, there is an ABI breakage, to avoid, a reserved_opts
>>>> bitfield is added to for the remaining bits available in the
>>>> structure.
>>>> Now for every new sa option, these reserved_opts can be reduced
>>>> and new option can be added.
>>>
>>> How do you make sure this field is initialized to 0?
>>>
>> Struct rte_security_ipsec_xform Is part of rte_security_capability as well
>> As a configuration structure in session create.
>> User, should ensure that if a device support that option(in capability), then
>> only these options will take into effect or else it will be don't care for the PMD.
>> The initial values of capabilities are set by PMD statically based on the features
>> that it support.
>> So if someone sets a bit in reserved_opts, it will work only if PMD support it
>> And sets the corresponding field in capabilities.
>> But yes, if a new field is added in future, and user sets the reserved_opts by mistake
>> And the PMD supports that feature as well, then that feature will be enabled.
>> This may or may not create issue depending on the feature which is enabled.
>>
>> Should I add a note in the comments to clarify that reserved_opts should be set as 0
>> And future releases may change this without notice(But reserved in itself suggest that)?
>> Adding an explicit check in session_create does not make sense to me.
>> What do you suggest?
>
> Yes at the minimum you should add a comment.
> You could also initialize it in the lib, but it is not always possible.
>
Provide a macro for initialization perhaps ... but there would be no way to enforce using it.
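One possible shape for such a macro, as a sketch only (the macro name below is
hypothetical and not part of any patch):

    #define RTE_SECURITY_IPSEC_SA_OPTIONS_INIT { 0 }

    struct rte_security_ipsec_sa_options opts =
            RTE_SECURITY_IPSEC_SA_OPTIONS_INIT; /* reserved_opts starts at 0 */
    opts.udp_encap = 1;                         /* then enable only what is needed */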
Ray K
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
2021-10-11 8:31 0% ` Thomas Monjalon
@ 2021-10-12 8:50 0% ` Kinsella, Ray
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-12 8:50 UTC (permalink / raw)
To: dev
On 08/10/2021 21:45, Akhil Goyal wrote:
> In struct rte_security_ipsec_sa_options, for every new option
> added, there is an ABI breakage, to avoid, a reserved_opts
> bitfield is added to for the remaining bits available in the
> structure.
> Now for every new sa option, these reserved_opts can be reduced
> and new option can be added.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> v2: rebase and removed libabigail.abignore change.
> Exception may be added when there is a need for change.
>
> lib/security/rte_security.h | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 7eb9f109ae..c0ea13892e 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -258,6 +258,12 @@ struct rte_security_ipsec_sa_options {
> * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
> */
> uint32_t l4_csum_enable : 1;
> +
> + /** Reserved bit fields for future extension
> + *
> + * Note: reduce number of bits in reserved_opts for every new option
> + */
> + uint32_t reserved_opts : 18;
> };
>
> /** IPSec security association direction */
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-12 8:35 3% ` Kundapura, Ganapati
@ 2021-10-12 8:47 4% ` Jerin Jacob
2021-10-12 9:10 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-12 8:47 UTC (permalink / raw)
To: Kundapura, Ganapati, Ray Kinsella, Thomas Monjalon, David Marchand
Cc: dpdk-dev, Jayatheerthan, Jay
On Tue, Oct 12, 2021 at 2:05 PM Kundapura, Ganapati
<ganapati.kundapura@intel.com> wrote:
>
> Hi Jerin,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: 11 October 2021 21:44
> > To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Jayatheerthan, Jay
> > > +que_id"); }
> > > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > index 70ca427..acabed4 100644
> > > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> > > /**< Eventdev enqueue count */
> > > uint64_t rx_enq_retry;
> > > /**< Eventdev enqueue retry count */
> > > + uint64_t rx_event_buf_count;
> > > + /**< Rx event buffered count */
> > > + uint64_t rx_event_buf_size;
> >
> >
> > Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
> Please confirm if moving the above two members to end of the structure overcomes ABI breakage?
+ @Ray Kinsella @Thomas Monjalon @David Marchand
It will still break the ABI. IMO, Since it is an ABI breaking release
it is OK. If there are no other objections, Please move the variable
to end
of the structure and update release notes for ABI changes.
> >
> >
> >
> > > + /**< Rx event buffer size */
> > > uint64_t rx_dropped;
> > > /**< Received packet dropped count */
> > > uint64_t rx_enq_start_ts;
> > > --
> > > 2.6.4
> > >
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
2021-10-11 16:14 3% ` Jerin Jacob
@ 2021-10-12 8:35 3% ` Kundapura, Ganapati
2021-10-12 8:47 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Kundapura, Ganapati @ 2021-10-12 8:35 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dpdk-dev, Jayatheerthan, Jay
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: 11 October 2021 21:44
> To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>
> Subject: Re: [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
>
> On Thu, Oct 7, 2021 at 6:27 PM Ganapati Kundapura
> <ganapati.kundapura@intel.com> wrote:
> >
> > Added telemetry callbacks to get Rx adapter stats, reset stats and to
> > get rx queue config information.
>
> rx -> Rx
>
> Change the subject to eventdev/rx_adapter
>
> >
> > Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
> >
> > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c
> > b/lib/eventdev/rte_event_eth_rx_adapter.c
> > index 9ac976c..fa7191c 100644
> > --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> > +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> > @@ -23,6 +23,7 @@
> > #include "eventdev_pmd.h"
> > #include "rte_eventdev_trace.h"
> > #include "rte_event_eth_rx_adapter.h"
> > +#include <rte_telemetry.h>
>
> Move this to the above block where all <...h> header files are grouped.
OK
>
>
> >
> > #define BATCH_SIZE 32
> > #define BLOCK_CNT_THRESHOLD 10
> > @@ -2852,6 +2853,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
> > struct rte_event_eth_rx_adapter_stats
> > *stats) {
> > struct rte_event_eth_rx_adapter *rx_adapter;
> > + struct rte_eth_event_enqueue_buffer *buf;
> > struct rte_event_eth_rx_adapter_stats dev_stats_sum = { 0 };
> > struct rte_event_eth_rx_adapter_stats dev_stats;
> > struct rte_eventdev *dev;
> > @@ -2887,8 +2889,11 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
> > if (rx_adapter->service_inited)
> > *stats = rx_adapter->stats;
> >
> > + buf = &rx_adapter->event_enqueue_buffer;
> > stats->rx_packets += dev_stats_sum.rx_packets;
> > stats->rx_enq_count += dev_stats_sum.rx_enq_count;
> > + stats->rx_event_buf_count = buf->count;
> > + stats->rx_event_buf_size = buf->events_size;
> >
> > return 0;
> > }
> > @@ -3052,3 +3057,146 @@
> > rte_event_eth_rx_adapter_queue_conf_get(uint8_t id,
> >
> > return 0;
> > }
> > +
> > +#define RXA_ADD_DICT(stats, s) rte_tel_data_add_dict_u64(d, #s,
> > +stats.s)
> > +
> > +static int
> > +handle_rxa_stats(const char *cmd __rte_unused,
> > + const char *params,
> > + struct rte_tel_data *d) {
> > + uint8_t rx_adapter_id;
> > + struct rte_event_eth_rx_adapter_stats rx_adptr_stats;
> > +
> > + if (params == NULL || strlen(params) == 0 || !isdigit(*params))
> > + return -1;
> > +
> > + /* Get Rx adapter ID from parameter string */
> > + rx_adapter_id = atoi(params);
> > + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id,
> > + -EINVAL);
> > +
> > + /* Get Rx adapter stats */
> > + if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id,
> > + &rx_adptr_stats)) {
> > + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n");
> > + return -1;
> > + }
> > +
> > + rte_tel_data_start_dict(d);
> > + rte_tel_data_add_dict_u64(d, "rx_adapter_id", rx_adapter_id);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_packets);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_poll_count);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_dropped);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_enq_retry);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_event_buf_count);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_event_buf_size);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_enq_count);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_enq_start_ts);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_enq_block_cycles);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_enq_end_ts);
> > + RXA_ADD_DICT(rx_adptr_stats, rx_intr_packets);
> > +
> > + return 0;
> > +}
> > +
> > +static int
> > +handle_rxa_stats_reset(const char *cmd __rte_unused,
> > + const char *params,
> > + struct rte_tel_data *d __rte_unused) {
> > + uint8_t rx_adapter_id;
> > +
> > + if (params == NULL || strlen(params) == 0 || ~isdigit(*params))
> > + return -1;
> > +
> > + /* Get Rx adapter ID from parameter string */
> > + rx_adapter_id = atoi(params);
> > + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id,
> > + -EINVAL);
> > +
> > + /* Reset Rx adapter stats */
> > + if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) {
> > + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n");
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int
> > +handle_rxa_get_queue_conf(const char *cmd __rte_unused,
> > + const char *params,
> > + struct rte_tel_data *d) {
> > + uint8_t rx_adapter_id;
> > + uint16_t rx_queue_id;
> > + int eth_dev_id;
> > + char *token, *l_params;
> > + struct rte_event_eth_rx_adapter_queue_conf queue_conf;
> > +
> > + if (params == NULL || strlen(params) == 0 || !isdigit(*params))
> > + return -1;
> > +
> > + /* Get Rx adapter ID from parameter string */
> > + l_params = strdup(params);
> > + token = strtok(l_params, ",");
> > + rx_adapter_id = strtoul(token, NULL, 10);
> > + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id,
> > + -EINVAL);
> > +
> > + token = strtok(NULL, ",");
> > + if (token == NULL || strlen(token) == 0 || !isdigit(*token))
> > + return -1;
> > +
> > + /* Get device ID from parameter string */
> > + eth_dev_id = strtoul(token, NULL, 10);
> > + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(eth_dev_id, -EINVAL);
> > +
> > + token = strtok(NULL, ",");
> > + if (token == NULL || strlen(token) == 0 || !isdigit(*token))
> > + return -1;
> > +
> > + /* Get Rx queue ID from parameter string */
> > + rx_queue_id = strtoul(token, NULL, 10);
> > + if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
> > + RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
> > + return -EINVAL;
> > + }
> > +
> > + token = strtok(NULL, "\0");
> > + if (token != NULL)
> > + RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev"
> > + " telemetry command, igrnoring");
> > +
> > + if (rte_event_eth_rx_adapter_queue_conf_get(rx_adapter_id,
> eth_dev_id,
> > + rx_queue_id, &queue_conf)) {
> > + RTE_EDEV_LOG_ERR("Failed to get Rx adapter queue config");
> > + return -1;
> > + }
> > +
> > + rte_tel_data_start_dict(d);
> > + rte_tel_data_add_dict_u64(d, "rx_adapter_id", rx_adapter_id);
> > + rte_tel_data_add_dict_u64(d, "eth_dev_id", eth_dev_id);
> > + rte_tel_data_add_dict_u64(d, "rx_queue_id", rx_queue_id);
> > + RXA_ADD_DICT(queue_conf, rx_queue_flags);
> > + RXA_ADD_DICT(queue_conf, servicing_weight);
> > + RXA_ADD_DICT(queue_conf.ev, queue_id);
> > + RXA_ADD_DICT(queue_conf.ev, sched_type);
> > + RXA_ADD_DICT(queue_conf.ev, priority);
> > + RXA_ADD_DICT(queue_conf.ev, flow_id);
> > +
> > + return 0;
> > +}
> > +
> > +RTE_INIT(rxa_init_telemetry)
> > +{
> > + rte_telemetry_register_cmd("/eventdev/rxa_stats",
> > + handle_rxa_stats,
> > + "Returns Rx adapter stats. Parameter: rx_adapter_id");
> > +
> > + rte_telemetry_register_cmd("/eventdev/rxa_stats_reset",
> > + handle_rxa_stats_reset,
> > + "Reset Rx adapter stats. Parameter: rx_adapter_id");
> > +
> > + rte_telemetry_register_cmd("/eventdev/rxa_queue_conf",
> > + handle_rxa_get_queue_conf,
> > + "Returns Rx queue config. Parameter: rxa_id, DevID,
> > +que_id"); }
> > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h
> > b/lib/eventdev/rte_event_eth_rx_adapter.h
> > index 70ca427..acabed4 100644
> > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> > /**< Eventdev enqueue count */
> > uint64_t rx_enq_retry;
> > /**< Eventdev enqueue retry count */
> > + uint64_t rx_event_buf_count;
> > + /**< Rx event buffered count */
> > + uint64_t rx_event_buf_size;
>
>
> Isn't ABI breakage? CI did not warn this. Isn't this a public structure?
Please confirm if moving the above two members to end of the structure overcomes ABI breakage?
>
>
>
> > + /**< Rx event buffer size */
> > uint64_t rx_dropped;
> > /**< Received packet dropped count */
> > uint64_t rx_enq_start_ts;
> > --
> > 2.6.4
> >
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
2021-10-11 22:15 3% ` Stephen Hemminger
@ 2021-10-12 8:31 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-12 8:31 UTC (permalink / raw)
To: Stephen Hemminger, Akhil Goyal
Cc: Thomas Monjalon, dev, david.marchand, hemant.agrawal,
Anoob Joseph, De Lara Guarch, Pablo, Trahe, Fiona, Doherty,
Declan, matan, g.singh, Zhang, Roy Fan, jianjay.zhou, asomalap,
ruifeng.wang, Ananyev, Konstantin, Nicolau, Radu, ajit.khaparde,
Nagadheeraj Rottela, Ankur Dwivedi, Power, Ciara, Richardson,
Bruce
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday 11 October 2021 23:16
> To: Akhil Goyal <gakhil@marvell.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>; dev@dpdk.org;
> david.marchand@redhat.com; hemant.agrawal@nxp.com; Anoob Joseph
> <anoobj@marvell.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com; Nagadheeraj
> Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; Power, Ciara <ciara.power@intel.com>; Kinsella,
> Ray <ray.kinsella@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: Re: [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
>
> On Mon, 11 Oct 2021 16:58:24 +0000
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > > 08/10/2021 22:45, Akhil Goyal:
> > > > In struct rte_security_ipsec_sa_options, for every new option
> > > > added, there is an ABI breakage, to avoid, a reserved_opts
> > > > bitfield is added to for the remaining bits available in the
> > > > structure.
> > > > Now for every new sa option, these reserved_opts can be reduced
> > > > and new option can be added.
> > >
> > > How do you make sure this field is initialized to 0?
> > >
> > Struct rte_security_ipsec_xform Is part of rte_security_capability as
> > well As a configuration structure in session create.
> > User, should ensure that if a device support that option(in
> > capability), then only these options will take into effect or else it
> will be don't care for the PMD.
> > The initial values of capabilities are set by PMD statically based on
> > the features that it support.
> > So if someone sets a bit in reserved_opts, it will work only if PMD
> > support it And sets the corresponding field in capabilities.
> > But yes, if a new field is added in future, and user sets the
> > reserved_opts by mistake And the PMD supports that feature as well,
> then that feature will be enabled.
> > This may or may not create issue depending on the feature which is
> enabled.
> >
> > Should I add a note in the comments to clarify that reserved_opts
> > should be set as 0 And future releases may change this without
> notice(But reserved in itself suggest that)?
> > Adding an explicit check in session_create does not make sense to me.
> > What do you suggest?
> >
> > Regards,
> > Akhil
> >
>
> The problem is if user creates an on stack variable and sets the
> unreserved fields to good values but other parts are garbage. This
> passes API/ABI unless you strictly enforce that all reserved fields are
> zero.
Right, but that is no better or worse than the current struct, in that respect, right?
Users can be careless there also - declare it on the stack and forget to memset.
struct rte_security_ipsec_sa_options {
uint32_t esn : 1;
uint32_t udp_encap : 1;
uint32_t copy_dscp : 1;
uint32_t copy_flabel : 1;
uint32_t copy_df : 1;
uint32_t dec_ttl : 1;
uint32_t ecn : 1;
uint32_t stats : 1;
uint32_t iv_gen_disable : 1;
uint32_t tunnel_hdr_verify : 2;
};
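A short sketch of the two patterns being discussed (hypothetical application
code):

    struct rte_security_ipsec_sa_options bad;          /* on the stack: reserved bits are garbage */
    bad.udp_encap = 1;

    struct rte_security_ipsec_sa_options good = { 0 }; /* every bit, including reserved_opts, is 0 */
    good.udp_encap = 1;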
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 1/5] ethdev: update modify field flow action
@ 2021-10-12 8:06 3% ` Viacheslav Ovsiienko
0 siblings, 0 replies; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-12 8:06 UTC (permalink / raw)
To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- immediate source can be presented either as an unsigned
64-bit integer or pointer to data pattern in memory.
There was no explicit pointer field defined in the union.
- the byte ordering for 64-bit integer was not specified.
Many fields have shorter lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was not defined and documented.
- 64-bit integer size is not enough to provide IPv6
addresses.
In order to cover the issues and exclude any ambiguities
the following is done:
- introduce the explicit pointer field
in rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with 16-byte array
- update the modify field flow action documentation
Appropriate deprecation notice has been removed.
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
Fixes: 2ba49b5f3721 ("doc: announce change to ethdev modify action data")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 24 +++++++++++++++++++++++-
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 7 +++++++
lib/ethdev/rte_flow.h | 16 ++++++++++++----
4 files changed, 42 insertions(+), 9 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..b08087511f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,22 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+``RTE_FLOW_FIELD_MAC_DST`` should follow the conventions of ``dst`` field
+in ``rte_flow_item_eth`` structure, with type ``RTE_FLOW_FIELD_IPV6_SRC`` -
+``rte_flow_item_ipv6`` conventions, and so on. If the field size is larger than
+16 bytes the pattern can be provided as pointer only.
+
+The bitfield extracted from the memory being applied as second operation
+parameter is defined by action width and by the destination field offset.
+Application should provide the data in immediate value memory (either as
+buffer or by pointer) exactly as item field without any applied explicit offset,
+and destination packet field (with specified width and bit offset) will be
+replaced by immediate source bits from the same bit offset. For example,
+to replace the third byte of MAC address with value 0x85, application should
+specify destination width as 8, destination offset as 16, and provide immediate
+value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
.. _table_rte_flow_action_modify_field:
@@ -2865,7 +2881,13 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+---------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
+---------------+----------------------------------------------------------+
- | ``value`` | immediate value or a pointer to this value |
+ | ``value`` | immediate value buffer (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_VALUE |
+ | | field type |
+ +---------------+----------------------------------------------------------+
+ | ``pvalue`` | pointer to immediate value data (source field only, not |
+ | | applicable to destination) for RTE_FLOW_FIELD_POINTER |
+ | | field type |
+---------------+----------------------------------------------------------+
Action: ``CONNTRACK``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..dee14077a5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -120,10 +120,6 @@ Deprecation Notices
* ethdev: Announce moving from dedicated modify function for each field,
to using the general ``rte_flow_modify_field`` action.
-* ethdev: The struct ``rte_flow_action_modify_data`` will be modified
- to support modifying fields larger than 64 bits.
- In addition, documentation will be updated to clarify byte order.
-
* ethdev: Attribute ``shared`` of the ``struct rte_flow_action_count``
is deprecated and will be removed in DPDK 21.11. Shared counters should
be managed using shared actions API (``rte_flow_shared_action_create`` etc).
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..578c1206e7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,13 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+ array is extended, data pointer field is explicitly added to union, the
+ action behavior is defined in more strict fashion and documentation updated.
+ The immediate value behavior has been changed, the entire immediate field
+ should be provided, and offset for immediate source bitfield is assigned
+ from destination one.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..f14f77772b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3217,10 +3217,18 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in relevant rte_flow_item_xxx.
+ * The immediate source bitfield offset is inherited from
+ * the destination's one.
*/
- uint64_t value;
+ uint8_t value[16];
+ /**
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
@@ -3240,7 +3248,7 @@ enum rte_flow_modify_op {
* RTE_FLOW_ACTION_TYPE_MODIFY_FIELD
*
* Modify a destination header field according to the specified
- * operation. Another packet field can be used as a source as well
+ * operation. Another field of the packet can be used as a source as well
* as tag, mark, metadata, immediate value or a pointer to it.
*/
struct rte_flow_action_modify_field {
--
2.18.1
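To make the documented MAC-address example concrete, a usage sketch based on
the structure layout in this patch (illustrative only; attaching the action to
a flow rule is omitted):

    /* Set the third byte of the destination MAC address to 0x85. */
    struct rte_flow_action_modify_field conf = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_MAC_DST,
                    .offset = 16,   /* skip the first two bytes (16 bits) */
            },
            .src = {
                    .field = RTE_FLOW_FIELD_VALUE,
                    /* same byte order and length as the dst field of
                     * rte_flow_item_eth; no extra offset applied here */
                    .value = { 0x00, 0x00, 0x85 },
            },
            .width = 8,             /* replace exactly one byte */
    };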
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
2021-10-11 16:58 0% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-11 22:15 3% ` Stephen Hemminger
@ 2021-10-12 6:59 0% ` Thomas Monjalon
2021-10-12 8:53 0% ` Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 6:59 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, david.marchand, hemant.agrawal, Anoob Joseph,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde,
Nagadheeraj Rottela, Ankur Dwivedi, ciara.power,
Stephen Hemminger, ray.kinsella, bruce.richardson
11/10/2021 18:58, Akhil Goyal:
> > 08/10/2021 22:45, Akhil Goyal:
> > > In struct rte_security_ipsec_sa_options, for every new option
> > > added, there is an ABI breakage, to avoid, a reserved_opts
> > > bitfield is added to for the remaining bits available in the
> > > structure.
> > > Now for every new sa option, these reserved_opts can be reduced
> > > and new option can be added.
> >
> > How do you make sure this field is initialized to 0?
> >
> Struct rte_security_ipsec_xform Is part of rte_security_capability as well
> As a configuration structure in session create.
> User, should ensure that if a device support that option(in capability), then
> only these options will take into effect or else it will be don't care for the PMD.
> The initial values of capabilities are set by PMD statically based on the features
> that it support.
> So if someone sets a bit in reserved_opts, it will work only if PMD support it
> And sets the corresponding field in capabilities.
> But yes, if a new field is added in future, and user sets the reserved_opts by mistake
> And the PMD supports that feature as well, then that feature will be enabled.
> This may or may not create issue depending on the feature which is enabled.
>
> Should I add a note in the comments to clarify that reserved_opts should be set as 0
> And future releases may change this without notice(But reserved in itself suggest that)?
> Adding an explicit check in session_create does not make sense to me.
> What do you suggest?
Yes at the minimum you should add a comment.
You could also initialize it in the lib, but it is not always possible.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
2021-10-11 20:38 0% ` Chautru, Nicolas
@ 2021-10-12 6:53 3% ` Thomas Monjalon
2021-10-12 16:36 4% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-12 6:53 UTC (permalink / raw)
To: Chautru, Nicolas; +Cc: gakhil, dev, trix, hemant.agrawal, Zhang, Mingshan
11/10/2021 22:38, Chautru, Nicolas:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 13/08/2021 18:51, Nicolas Chautru:
> > > Adding a missing operation when CRC16
> > > is being used for TB CRC check.
> > >
> > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > ---
> > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > @@ -84,6 +84,7 @@ API Changes
> > > Also, make sure to start the actual text at the margin.
> > > =======================================================
> > >
> > > +* bbdev: Added capability related to more comprehensive CRC options.
> >
> > That's not an API change, the enum symbols are the same.
> > Only enum values are changed so it impacts only ABI.
>
> Hi Thomas,
> How is that not a API change when new additional capability are exposed? Ie. new enums defined for new capabilities.
API change is when the app source code has to be updated.
ABI change is when the app binary has to be rebuilt.
> I think I see other similar cases in the same release notes " * cryptodev: ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algo ...".
I don't see this one.
> You know best, just checking the intent, maybe worth clarifying the guideline except in case this is just me.
Given my explanation above, how would you classify your change?
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag
2021-10-12 0:04 4% ` [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-12 3:37 0% ` Jerin Jacob
@ 2021-10-12 6:42 0% ` Andrew Rybchenko
1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-12 6:42 UTC (permalink / raw)
To: Dmitry Kozlyuk, dev; +Cc: Thomas Monjalon, Matan Azrad, Olivier Matz
On 10/12/21 3:04 AM, Dmitry Kozlyuk wrote:
> Mempool is a generic allocator that is not necessarily used for device
> IO operations and its memory for DMA. Add MEMPOOL_F_NON_IO flag to mark
> such mempools.
> Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 3 +++
> lib/mempool/rte_mempool.h | 4 ++++
> 2 files changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 5036641842..dbabdc9759 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -208,6 +208,9 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
> + that objects from this pool will not be used for device IO (e.g. DMA).
> +
>
> ABI Changes
> -----------
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index e2bf40aa09..b48d9f89c2 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -262,6 +262,7 @@ struct rte_mempool {
> #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
> #define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
> #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
> +#define MEMPOOL_F_NON_IO 0x0040 /**< Not used for device IO (DMA). */
Doesn't it imply MEMPOOL_F_NO_IOVA_CONTIG?
Shouldn't it reject mempool population with an iova parameter
other than RTE_BAD_IOVA?
I see that it is just a hint, but I'm just trying to make
the full picture consistent.
On second thought: isn't iova==RTE_BAD_IOVA
sufficient as a hint?
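For reference, a sketch of the existing way to end up with objects that carry
no IOVA at all (mp, vaddr and len are assumed to exist already):

    /* Chunk registered without an IOVA; objects get RTE_BAD_IOVA. */
    rte_mempool_populate_iova(mp, vaddr, RTE_BAD_IOVA, len, NULL, NULL);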
>
> /**
> * @internal When debug is enabled, store some statistics.
> @@ -991,6 +992,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> * "single-consumer". Otherwise, it is "multi-consumers".
> * - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
> * necessarily be contiguous in IO memory.
> + * - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
> + * never used for device IO, i.e. for DMA operations.
> + * It's a hint to other components and does not affect the mempool behavior.
> * @return
> * The pointer to the new allocated mempool, on success. NULL on error
> * with rte_errno set appropriately. Possible rte_errno values include:
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
2021-10-11 23:06 0% ` Ananyev, Konstantin
@ 2021-10-12 5:47 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-12 5:47 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
On 10/12/21 2:06 AM, Ananyev, Konstantin wrote:
>>>>> At queue configure stage always allocate space for maximum possible
>>>>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
>>>>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
>>>>> pointer to internal queue data without extra checking of current number
>>>>> of configured queues.
>>>>> That would help in future to hide rte_eth_dev and related structures.
>>>>> It means that from now on, each ethdev port will always consume:
>>>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>>>> bytes of memory for its queue pointers.
>>>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>>>>
>>>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>>>> ---
>>>>> lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>>>>> 1 file changed, 9 insertions(+), 27 deletions(-)
>>>>>
>>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>>>> index ed37f8871b..c8abda6dd7 100644
>>>>> --- a/lib/ethdev/rte_ethdev.c
>>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>>>
>>>>> if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>>>>> dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>>>>> - sizeof(dev->data->rx_queues[0]) * nb_queues,
>>>>> + sizeof(dev->data->rx_queues[0]) *
>>>>> + RTE_MAX_QUEUES_PER_PORT,
>>>>> RTE_CACHE_LINE_SIZE);
>>>>
>>>> Looking at it I have few questions:
>>>> 1. Why is nb_queues == 0 case kept as an exception? Yes,
>>>> strictly speaking it is not the problem of the patch,
>>>> DPDK will still segfault (non-debug build) if I
>>>> allocate Tx queues only but call rte_eth_rx_burst().
>>>
>>> eth_dev_rx_queue_config(.., nb_queues=0) is used in few places to clean-up things.
>>
>> No, as far as I know. For Tx only application (e.g. traffic generator)
>> it is 100% legal to configure with tx_queues=X, rx_queues=0.
>> The same is for Rx only application (e.g. packet capture).
>
> Yes, that is valid config for sure.
> I just pointed that simply ignoring 'nb_queues' value and
> always allocating space for max possible queues, i.e:
>
> eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> {
> ....
> - if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> + if (dev->data->rx_queues == NULL) {
> wouldn't work, as right now nb_queues == 0 has extra special meaning -
> do final cleanup and free dev->data->rx_queues.
> But re-reading the text below, it seems that I misunderstood you
> and it probably wasn't your intention anyway.
>
>>
>>>
>>>> After reading the patch description I thought that
>>>> we're trying to address it.
>>>
>>> We do, though I can't see how we can address it in this patch.
>>> Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
>>> or around and setup RX function pointers only when dev->data->rx_queues != NULL.
>>> Same for TX.
>>
>> You don't need to care about these pointers, if these arrays are
>> always allocated. See (3) below.
>>
>>>
>>>> 2. Why do we need to allocate memory dynamically?
>>>> Can we just make rx_queues an array of appropriate size?
>>>
>>> Pavan already asked same question.
>>> My answer to him:
>>> Yep we can, and yes it will simplify this peace of code.
>>> The main reason I decided no to do this change now -
>>> it will change layout of the_eth_dev_data structure.
>>> In this series I tried to mininize(/avoid) changes in rte_eth_dev and rte_eth_dev_data,
>>> as much as possible to avoid any unforeseen performance and functional impacts.
>>> If we'll manage to make rte_eth_dev and rte_eth_dev_data private we can in future
>>> consider that one and other changes in rte_eth_dev and rte_eth_dev_data layouts
>>> without worrying about ABI breakage
>>
>> Thanks a lot. Makes sense.
>>
>>>> May be wasting 512K unconditionally is too much.
>>>> 3. If wasting 512K is too much, I'd consider to move
>>>> allocation to eth_dev_get(). If
>>>
>>> Don't understand where 512KB came from.
>>
>> 32 port * 1024 queues * 2 types * 8 pointer size
>> if we allocate as in (2) above.
>>
>>> each ethdev port will always consume:
>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>> bytes of memory for its queue pointers.
>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>
>> IMHO it will be a bit nicer if queue pointers arrays are allocated
>> on device get if size is fixed. It is just a suggestion. If you
>> disagree, feel free to drop it.
>
> You mean - allocate these arrays somewhere at rte_eth_dev_allocate() path?
Yes, eth_dev_get() mentioned above is called from
rte_eth_dev_allocate().
> That sounds like an interesting idea, but seems too drastic to me at that stage.
Yes, of course, we can address it later.
>
>>
>>>>> if (dev->data->rx_queues == NULL) {
>>>>> dev->data->nb_rx_queues = 0;
>>>>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>>>
>>>>> rxq = dev->data->rx_queues;
>>>>>
>>>>> - for (i = nb_queues; i < old_nb_queues; i++)
>>>>> + for (i = nb_queues; i < old_nb_queues; i++) {
>>>>> (*dev->dev_ops->rx_queue_release)(rxq[i]);
>>>>> - rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
>>>>> - RTE_CACHE_LINE_SIZE);
>>>>> - if (rxq == NULL)
>>>>> - return -(ENOMEM);
>>>>> - if (nb_queues > old_nb_queues) {
>>>>> - uint16_t new_qs = nb_queues - old_nb_queues;
>>>>> -
>>>>> - memset(rxq + old_nb_queues, 0,
>>>>> - sizeof(rxq[0]) * new_qs);
>>>>> + rxq[i] = NULL;
>>>>
>>>> It looks like the patch should be rebased on top of
>>>> next-net main because of queue release patches.
>>>>
>>>> [snip]
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag
2021-10-12 0:04 4% ` [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag Dmitry Kozlyuk
@ 2021-10-12 3:37 0% ` Jerin Jacob
2021-10-12 6:42 0% ` Andrew Rybchenko
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-10-12 3:37 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: dpdk-dev, Thomas Monjalon, Matan Azrad, Olivier Matz, Andrew Rybchenko
On Tue, Oct 12, 2021 at 5:34 AM Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com> wrote:
>
> Mempool is a generic allocator that is not necessarily used for device
> IO operations and its memory for DMA. Add MEMPOOL_F_NON_IO flag to mark
> such mempools.
> Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 3 +++
> lib/mempool/rte_mempool.h | 4 ++++
> 2 files changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 5036641842..dbabdc9759 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -208,6 +208,9 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
> + that objects from this pool will not be used for device IO (e.g. DMA).
> +
>
> ABI Changes
> -----------
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index e2bf40aa09..b48d9f89c2 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -262,6 +262,7 @@ struct rte_mempool {
> #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
> #define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
> #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
> +#define MEMPOOL_F_NON_IO 0x0040 /**< Not used for device IO (DMA). */
Since it is a hint, how about changing the flag to MEMPOOL_F_HINT_NON_IO?
Otherwise, it looks good to me.
Acked-by: Jerin Jacob <jerinj@marvell.com>
>
> /**
> * @internal When debug is enabled, store some statistics.
> @@ -991,6 +992,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
> * "single-consumer". Otherwise, it is "multi-consumers".
> * - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
> * necessarily be contiguous in IO memory.
> + * - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
> + * never used for device IO, i.e. for DMA operations.
> + * It's a hint to other components and does not affect the mempool behavior.
> * @return
> * The pointer to the new allocated mempool, on success. NULL on error
> * with rte_errno set appropriately. Possible rte_errno values include:
> --
> 2.25.1
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag
@ 2021-10-12 0:04 4% ` Dmitry Kozlyuk
2021-10-12 3:37 0% ` Jerin Jacob
2021-10-12 6:42 0% ` Andrew Rybchenko
1 sibling, 2 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-12 0:04 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon, Matan Azrad, Olivier Matz, Andrew Rybchenko
Mempool is a generic allocator that is not necessarily used for device
IO operations and its memory for DMA. Add MEMPOOL_F_NON_IO flag to mark
such mempools.
Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/mempool/rte_mempool.h | 4 ++++
2 files changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 5036641842..dbabdc9759 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -208,6 +208,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+ that objects from this pool will not be used for device IO (e.g. DMA).
+
ABI Changes
-----------
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index e2bf40aa09..b48d9f89c2 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -262,6 +262,7 @@ struct rte_mempool {
#define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
#define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NON_IO 0x0040 /**< Not used for device IO (DMA). */
/**
* @internal When debug is enabled, store some statistics.
@@ -991,6 +992,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
* "single-consumer". Otherwise, it is "multi-consumers".
* - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
* necessarily be contiguous in IO memory.
+ * - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
+ * never used for device IO, i.e. for DMA operations.
+ * It's a hint to other components and does not affect the mempool behavior.
* @return
* The pointer to the new allocated mempool, on success. NULL on error
* with rte_errno set appropriately. Possible rte_errno values include:
--
2.25.1
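A usage sketch of the new flag (pool parameters are arbitrary, purely for
illustration):

    /* A pool of host-only objects: hint that no DMA mapping is needed. */
    struct rte_mempool *mp = rte_mempool_create("host_objs", 4096, 128,
                    64, 0, NULL, NULL, NULL, NULL,
                    SOCKET_ID_ANY, MEMPOOL_F_NON_IO);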
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
2021-10-11 17:15 0% ` Andrew Rybchenko
@ 2021-10-11 23:06 0% ` Ananyev, Konstantin
2021-10-12 5:47 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 23:06 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
> >>> At queue configure stage always allocate space for maximum possible
> >>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> >>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> >>> pointer to internal queue data without extra checking of current number
> >>> of configured queues.
> >>> That would help in future to hide rte_eth_dev and related structures.
> >>> It means that from now on, each ethdev port will always consume:
> >>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> >>> bytes of memory for its queue pointers.
> >>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> >>>
> >>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >>> ---
> >>> lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> >>> 1 file changed, 9 insertions(+), 27 deletions(-)
> >>>
> >>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >>> index ed37f8871b..c8abda6dd7 100644
> >>> --- a/lib/ethdev/rte_ethdev.c
> >>> +++ b/lib/ethdev/rte_ethdev.c
> >>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >>>
> >>> if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> >>> dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> >>> - sizeof(dev->data->rx_queues[0]) * nb_queues,
> >>> + sizeof(dev->data->rx_queues[0]) *
> >>> + RTE_MAX_QUEUES_PER_PORT,
> >>> RTE_CACHE_LINE_SIZE);
> >>
> >> Looking at it I have few questions:
> >> 1. Why is nb_queues == 0 case kept as an exception? Yes,
> >> strictly speaking it is not the problem of the patch,
> >> DPDK will still segfault (non-debug build) if I
> >> allocate Tx queues only but call rte_eth_rx_burst().
> >
> > eth_dev_rx_queue_config(.., nb_queues=0) is used in few places to clean-up things.
>
> No, as far as I know. For Tx only application (e.g. traffic generator)
> it is 100% legal to configure with tx_queues=X, rx_queues=0.
> The same is for Rx only application (e.g. packet capture).
Yes, that is valid config for sure.
I just pointed that simply ignoring 'nb_queues' value and
always allocating space for max possible queues, i.e:
eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
{
....
- if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ if (dev->data->rx_queues == NULL) {
wouldn't work, as right now nb_queues == 0 has extra special meaning -
do final cleanup and free dev->data->rx_queues.
But re-reading the text below, it seems that I misunderstood you
and it probably wasn't your intention anyway.
>
> >
> >> After reading the patch description I thought that
> >> we're trying to address it.
> >
> > We do, though I can't see how we can address it in this patch.
> > Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
> > or around and setup RX function pointers only when dev->data->rx_queues != NULL.
> > Same for TX.
>
> You don't need to care about these pointers, if these arrays are
> always allocated. See (3) below.
>
> >
> >> 2. Why do we need to allocate memory dynamically?
> >> Can we just make rx_queues an array of appropriate size?
> >
> > Pavan already asked same question.
> > My answer to him:
> > Yep we can, and yes it will simplify this peace of code.
> > The main reason I decided no to do this change now -
> > it will change layout of the_eth_dev_data structure.
> > In this series I tried to mininize(/avoid) changes in rte_eth_dev and rte_eth_dev_data,
> > as much as possible to avoid any unforeseen performance and functional impacts.
> > If we'll manage to make rte_eth_dev and rte_eth_dev_data private we can in future
> > consider that one and other changes in rte_eth_dev and rte_eth_dev_data layouts
> > without worrying about ABI breakage
>
> Thanks a lot. Makes sense.
>
> >> May be wasting 512K unconditionally is too much.
> >> 3. If wasting 512K is too much, I'd consider to move
> >> allocation to eth_dev_get(). If
> >
> > Don't understand where 512KB came from.
>
> 32 port * 1024 queues * 2 types * 8 pointer size
> if we allocate as in (2) above.
>
> > each ethdev port will always consume:
> > ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> > bytes of memory for its queue pointers.
> > With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>
> IMHO it will be a bit nicer if queue pointers arrays are allocated
> on device get if size is fixed. It is just a suggestion. If you
> disagree, feel free to drop it.
You mean - allocate these arrays somewhere on the rte_eth_dev_allocate() path?
That sounds like an interesting idea, but it seems too drastic to me at this stage.
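Roughly what I understand the suggestion to be (illustrative only; the helper
name below is made up and this is not something I plan for this series):

        /* allocate fixed-size queue pointer arrays once, when the port data
         * is allocated, instead of at configure time */
        static int
        eth_dev_queue_arrays_alloc(struct rte_eth_dev_data *data)
        {
                data->rx_queues = rte_zmalloc("ethdev->rx_queues",
                                sizeof(data->rx_queues[0]) * RTE_MAX_QUEUES_PER_PORT,
                                RTE_CACHE_LINE_SIZE);
                data->tx_queues = rte_zmalloc("ethdev->tx_queues",
                                sizeof(data->tx_queues[0]) * RTE_MAX_QUEUES_PER_PORT,
                                RTE_CACHE_LINE_SIZE);
                if (data->rx_queues == NULL || data->tx_queues == NULL)
                        return -ENOMEM;
                return 0;
        }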
>
> >>> if (dev->data->rx_queues == NULL) {
> >>> dev->data->nb_rx_queues = 0;
> >>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >>>
> >>> rxq = dev->data->rx_queues;
> >>>
> >>> - for (i = nb_queues; i < old_nb_queues; i++)
> >>> + for (i = nb_queues; i < old_nb_queues; i++) {
> >>> (*dev->dev_ops->rx_queue_release)(rxq[i]);
> >>> - rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> >>> - RTE_CACHE_LINE_SIZE);
> >>> - if (rxq == NULL)
> >>> - return -(ENOMEM);
> >>> - if (nb_queues > old_nb_queues) {
> >>> - uint16_t new_qs = nb_queues - old_nb_queues;
> >>> -
> >>> - memset(rxq + old_nb_queues, 0,
> >>> - sizeof(rxq[0]) * new_qs);
> >>> + rxq[i] = NULL;
> >>
> >> It looks like the patch should be rebased on top of
> >> next-net main because of queue release patches.
> >>
> >> [snip]
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
2021-10-11 16:58 0% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-11 22:15 3% ` Stephen Hemminger
2021-10-12 8:31 0% ` Kinsella, Ray
2021-10-12 6:59 0% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-10-11 22:15 UTC (permalink / raw)
To: Akhil Goyal
Cc: Thomas Monjalon, dev, david.marchand, hemant.agrawal,
Anoob Joseph, pablo.de.lara.guarch, fiona.trahe, declan.doherty,
matan, g.singh, roy.fan.zhang, jianjay.zhou, asomalap,
ruifeng.wang, konstantin.ananyev, radu.nicolau, ajit.khaparde,
Nagadheeraj Rottela, Ankur Dwivedi, ciara.power, ray.kinsella,
bruce.richardson
On Mon, 11 Oct 2021 16:58:24 +0000
Akhil Goyal <gakhil@marvell.com> wrote:
> > 08/10/2021 22:45, Akhil Goyal:
> > > In struct rte_security_ipsec_sa_options, for every new option
> > > added, there is an ABI breakage, to avoid, a reserved_opts
> > > bitfield is added to for the remaining bits available in the
> > > structure.
> > > Now for every new sa option, these reserved_opts can be reduced
> > > and new option can be added.
> >
> > How do you make sure this field is initialized to 0?
> >
> Struct rte_security_ipsec_xform is part of rte_security_capability as well
> as a configuration structure in session create.
> The user should ensure that a device supports a given option (in capability);
> only then will that option take effect, otherwise it is don't-care for the PMD.
> The initial values of capabilities are set by the PMD statically, based on the
> features that it supports.
> So if someone sets a bit in reserved_opts, it will work only if the PMD supports it
> and sets the corresponding field in capabilities.
> But yes, if a new field is added in future, and the user sets reserved_opts by mistake
> and the PMD supports that feature as well, then that feature will be enabled.
> This may or may not create an issue, depending on the feature which is enabled.
>
> Should I add a note in the comments to clarify that reserved_opts should be set to 0
> and that future releases may change this without notice (but "reserved" in itself suggests that)?
> Adding an explicit check in session_create does not make sense to me.
> What do you suggest?
>
> Regards,
> Akhil
>
The problem is if a user creates an on-stack variable and sets the non-reserved
fields to good values but the other parts are garbage. This passes the API/ABI
unless you strictly enforce that all reserved fields are zero.
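A minimal sketch of that hazard, and of zero-initialisation as the usual
remedy ('esn' is just one example member, the rest are omitted):

        struct rte_security_ipsec_sa_options opts;     /* on the stack: every bit,
                                                         * including reserved_opts,
                                                         * starts as garbage */
        opts.esn = 1;                                   /* only the known fields set */

        /* designated initializer zeroes everything else, reserved_opts included */
        struct rte_security_ipsec_sa_options opts2 = { .esn = 1 };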
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
2021-10-11 20:17 3% ` Thomas Monjalon
@ 2021-10-11 20:38 0% ` Chautru, Nicolas
2021-10-12 6:53 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-11 20:38 UTC (permalink / raw)
To: Thomas Monjalon, gakhil; +Cc: dev, trix, hemant.agrawal, Zhang, Mingshan
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, October 11, 2021 1:17 PM
> To: gakhil@marvell.com; Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: dev@dpdk.org; trix@redhat.com; hemant.agrawal@nxp.com; Zhang,
> Mingshan <mingshan.zhang@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16
> check
>
> 13/08/2021 18:51, Nicolas Chautru:
> > Adding a missing operation when CRC16
> > is being used for TB CRC check.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -84,6 +84,7 @@ API Changes
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* bbdev: Added capability related to more comprehensive CRC options.
>
> That's not an API change, the enum symbols are the same.
> Only enum values are changed so it impacts only ABI.
Hi Thomas,
How is that not an API change when new additional capabilities are exposed? I.e. new enums defined for new capabilities.
I think I see other similar cases in the same release notes: " * cryptodev: ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algo ...".
You know best, just checking the intent; maybe it is worth clarifying the guideline, unless this is just me.
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
@ 2021-10-11 20:17 3% ` Thomas Monjalon
2021-10-11 20:38 0% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-11 20:17 UTC (permalink / raw)
To: gakhil, Nicolas Chautru; +Cc: dev, trix, hemant.agrawal, mingshan.zhang
13/08/2021 18:51, Nicolas Chautru:
> Adding a missing operation when CRC16
> is being used for TB CRC check.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -84,6 +84,7 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* bbdev: Added capability related to more comprehensive CRC options.
That's not an API change, the enum symbols are the same.
Only enum values are changed so it impacts only ABI.
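A schematic illustration of that distinction, with hypothetical names:

        /* before */
        enum op_cap { CAP_A = (1 << 0), CAP_B = (1 << 1) };
        /* after: a new capability is inserted in the middle */
        enum op_cap { CAP_A = (1 << 0), CAP_NEW = (1 << 1), CAP_B = (1 << 2) };

The symbol CAP_B (API) is untouched, but its value (ABI) is not: a binary
built against the old header still passes (1 << 1).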
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-11 16:52 0% ` Ananyev, Konstantin
@ 2021-10-11 17:22 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 17:22 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
On 10/11/21 7:52 PM, Ananyev, Konstantin wrote:
>
>
>> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
>>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>>> pointers to internal data from rte_eth_dev structure into a
>>> separate flat array. That array will remain in a public header.
>>> The intention here is to make rte_eth_dev and related structures internal.
>>> That should allow future possible changes to core eth_dev structures
>>> to be transparent to the user and help to avoid ABI/API breakages.
>>> The plan is to keep minimal part of data from rte_eth_dev public,
>>> so we still can use inline functions for fast-path calls
>>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >>> The whole idea behind this new schema:
> >>> 1. PMDs keep setting up fast-path function pointers and related data
>>> inside rte_eth_dev struct in the same way they did it before.
>>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>> (for secondary process) we call eth_dev_fp_ops_setup, which
>>> copies these function and data pointers into rte_eth_fp_ops[port_id].
>>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>> into some dummy values.
>>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>> flat array to call PMD specific functions.
>>> That approach should allow us to make rte_eth_devices[] private
>>> without introducing regression and help to avoid changes in drivers code.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> Overall LGTM, few nits below.
>>
>>> ---
>>> lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
>>> lib/ethdev/ethdev_private.h | 7 +++++
>>> lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
>>> lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>> 4 files changed, 141 insertions(+)
>>>
>>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>>> index 012cf73ca2..3eeda6e9f9 100644
>>> --- a/lib/ethdev/ethdev_private.c
>>> +++ b/lib/ethdev/ethdev_private.c
>>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>> RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>> return str == NULL ? -1 : 0;
>>> }
>>> +
>>> +static uint16_t
>>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>>> + __rte_unused struct rte_mbuf **rx_pkts,
>>> + __rte_unused uint16_t nb_pkts)
>>> +{
>>> + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>>
>> May be "unconfigured" -> "stopped" ? Or "non-started" ?
>
> Yes, it can be configured but not started.
> So 'not started' seems like a better wording here.
> Another option probably: 'not ready'.
> What do people think?
Taking into account that some PMDs would like to set dummy
pointers in some specific conditions, I think "not ready"
is the best option here.
>
> ...
>
>>
>>> + rte_errno = ENOTSUP;
>>> + return 0;
>>> +}
>>> +
>>> +struct rte_eth_fp_ops {
>>> +
>>> + /**
>>> + * Rx fast-path functions and related data.
>>> + * 64-bit systems: occupies first 64B line
>>> + */
>>
>> As I understand the above comment is for a group of below
>> fields. If so, Doxygen annocation for member groups should
>> be used.
>
> Ok, and how to do it?
>
See [1]
[1] https://www.doxygen.nl/manual/grouping.html#memgroup
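Roughly (types and fields abbreviated), in the same style rte_ethdev.h
already uses for e.g. the Rx hardware descriptor states:

        /**@{@name Rx fast-path functions and related data.
         * 64-bit systems: occupies first 64B line
         */
        eth_rx_burst_t rx_pkt_burst;
        ...
        /**@}*/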
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
2021-10-11 16:25 3% ` Ananyev, Konstantin
@ 2021-10-11 17:15 0% ` Andrew Rybchenko
2021-10-11 23:06 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-11 17:15 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
On 10/11/21 7:25 PM, Ananyev, Konstantin wrote:
>
>
>>> At queue configure stage always allocate space for maximum possible
>>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
>>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
>>> pointer to internal queue data without extra checking of current number
>>> of configured queues.
>>> That would help in future to hide rte_eth_dev and related structures.
>>> It means that from now on, each ethdev port will always consume:
>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>> bytes of memory for its queue pointers.
>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>> ---
>>> lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>>> 1 file changed, 9 insertions(+), 27 deletions(-)
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index ed37f8871b..c8abda6dd7 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>
>>> if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>>> dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>>> - sizeof(dev->data->rx_queues[0]) * nb_queues,
>>> + sizeof(dev->data->rx_queues[0]) *
>>> + RTE_MAX_QUEUES_PER_PORT,
>>> RTE_CACHE_LINE_SIZE);
>>
>> Looking at it I have few questions:
>> 1. Why is nb_queues == 0 case kept as an exception? Yes,
>> strictly speaking it is not the problem of the patch,
>> DPDK will still segfault (non-debug build) if I
>> allocate Tx queues only but call rte_eth_rx_burst().
>
> eth_dev_rx_queue_config(.., nb_queues=0) is used in few places to clean-up things.
No, as far as I know. For Tx only application (e.g. traffic generator)
it is 100% legal to configure with tx_queues=X, rx_queues=0.
The same is for Rx only application (e.g. packet capture).
>
>> After reading the patch description I thought that
>> we're trying to address it.
>
> We do, though I can't see how we can address it in this patch.
> Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
> or around and setup RX function pointers only when dev->data->rx_queues != NULL.
> Same for TX.
You don't need to care about these pointers, if these arrays are
always allocated. See (3) below.
>
>> 2. Why do we need to allocate memory dynamically?
>> Can we just make rx_queues an array of appropriate size?
>
> Pavan already asked same question.
> My answer to him:
> Yep we can, and yes it will simplify this piece of code.
> The main reason I decided not to do this change now -
> it will change the layout of the rte_eth_dev_data structure.
> In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data,
> as much as possible to avoid any unforeseen performance and functional impacts.
> If we'll manage to make rte_eth_dev and rte_eth_dev_data private we can in future
> consider that one and other changes in rte_eth_dev and rte_eth_dev_data layouts
> without worrying about ABI breakage
Thanks a lot. Makes sense.
>> May be wasting 512K unconditionally is too much.
>> 3. If wasting 512K is too much, I'd consider to move
>> allocation to eth_dev_get(). If
>
> Don't understand where 512KB came from.
32 port * 1024 queues * 2 types * 8 pointer size
if we allocate as in (2) above.
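(That is 32 * 1024 * 2 * 8 B = 512 KiB in total, assuming the default
RTE_MAX_ETHPORTS == 32, versus 16 KiB per actually created port with the
dynamic allocation done in this patch.)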
> each ethdev port will always consume:
> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> bytes of memory for its queue pointers.
> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
IMHO it will be a bit nicer if queue pointers arrays are allocated
on device get if size is fixed. It is just a suggestion. If you
disagree, feel free to drop it.
>>> if (dev->data->rx_queues == NULL) {
>>> dev->data->nb_rx_queues = 0;
>>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>
>>> rxq = dev->data->rx_queues;
>>>
>>> - for (i = nb_queues; i < old_nb_queues; i++)
>>> + for (i = nb_queues; i < old_nb_queues; i++) {
>>> (*dev->dev_ops->rx_queue_release)(rxq[i]);
>>> - rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
>>> - RTE_CACHE_LINE_SIZE);
>>> - if (rxq == NULL)
>>> - return -(ENOMEM);
>>> - if (nb_queues > old_nb_queues) {
>>> - uint16_t new_qs = nb_queues - old_nb_queues;
>>> -
>>> - memset(rxq + old_nb_queues, 0,
>>> - sizeof(rxq[0]) * new_qs);
>>> + rxq[i] = NULL;
>>
>> It looks like the patch should be rebased on top of
>> next-net main because of queue release patches.
>>
>> [snip]
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
2021-10-11 15:47 0% ` Ananyev, Konstantin
@ 2021-10-11 17:03 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 17:03 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
On 10/11/21 6:47 PM, Ananyev, Konstantin wrote:
>
>>
>> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
>>> Rework fast-path ethdev functions to use rte_eth_fp_ops[].
>>> While it is an API/ABI breakage, this change is intended to be
>>> transparent for both users (no changes in user app is required) and
>>> PMD developers (no changes in PMD is required).
>>> One extra thing to note - RX/TX callback invocation will cause extra
>>> function call with these changes. That might cause some insignificant
>>> slowdown for code-path where RX/TX callbacks are heavily involved.
>>
>> I'm sorry for nit picking here and below:
>>
>> RX -> Rx, TX -> Tx everywhere above.
>>
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>> ---
>>> lib/ethdev/ethdev_private.c | 31 +++++
>>> lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
>>> lib/ethdev/version.map | 3 +
>>> 3 files changed, 208 insertions(+), 68 deletions(-)
>>>
>>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>>> index 3eeda6e9f9..1222c6f84e 100644
>>> --- a/lib/ethdev/ethdev_private.c
>>> +++ b/lib/ethdev/ethdev_private.c
>>> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>>> fpo->txq.data = dev->data->tx_queues;
>>> fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>>> }
>>> +
>>> +uint16_t
>>> +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
>>> + void *opaque)
>>> +{
>>> + const struct rte_eth_rxtx_callback *cb = opaque;
>>> +
>>> + while (cb != NULL) {
>>> + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
>>> + nb_pkts, cb->param);
>>> + cb = cb->next;
>>> + }
>>> +
>>> + return nb_rx;
>>> +}
>>> +
>>> +uint16_t
>>> +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
>>> +{
>>> + const struct rte_eth_rxtx_callback *cb = opaque;
>>> +
>>> + while (cb != NULL) {
>>> + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
>>> + cb->param);
>>> + cb = cb->next;
>>> + }
>>> +
>>> + return nb_pkts;
>>> +}
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index cdd16d6e57..c0e1a40681 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
>>>
>>> #include <rte_ethdev_core.h>
>>>
>>> +/**
>>> + * @internal
>>> + * Helper routine for eth driver rx_burst API.
>>
>> rx -> Rx
>>
>>> + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
>>> + * Does necessary post-processing - invokes RX callbacks if any, etc.
>>
>> RX -> Rx
>>
>>> + *
>>> + * @param port_id
>>> + * The port identifier of the Ethernet device.
>>> + * @param queue_id
>>> + * The index of the receive queue from which to retrieve input packets.
>>
>> Isn't:
>> The index of the queue from which packets are received?
>
> I copied it from comments from rte_eth_rx_burst().
> I suppose it is just two ways to say the same thing.
Maybe it is just my problem that I don't understand the
initial description.
>
>>
>>> + * @param rx_pkts
>>> + * The address of an array of pointers to *rte_mbuf* structures that
>>> + * have been retrieved from the device.
>>> + * @param nb_pkts
>>
>> Should be @param nb_rx
>
> Ack, will fix.
>
>>
>>> + * The number of packets that were retrieved from the device.
>>> + * @param nb_pkts
>>> + * The number of elements in *rx_pkts* array.
>>
>> @p should be used to refer to a paramter.
>
> To be more precise you are talking about:
> s/*rx_pkts*/@ rx_pkts/
s/"rx_pkts"/@p rx_pkts/
> ?
>
>>
>> The description does not help to understand why both nb_rx and
>> nb_pkts are necessary. Isn't nb_pkts >= nb_rx and nb_rx
>> sufficient?
>
> Nope, that's for callbacks call.
> Will update the comment.
Thanks.
>>> + * @param opaque
>>> + * Opaque pointer of RX queue callback related data.
>>
>> RX -> Rx
>>
>>> + *
>>> + * @return
>>> + * The number of packets effectively supplied to the *rx_pkts* array.
>>
>> @p should be used to refer to a parameter.
>>
>>> + */
>>> +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
>>> + void *opaque);
>>> +
>>> /**
>>> *
>>> * Retrieve a burst of input packets from a receive queue of an Ethernet
>>> @@ -4995,23 +5022,37 @@ static inline uint16_t
>>> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>> struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>>> {
>>> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> uint16_t nb_rx;
>>> + struct rte_eth_fp_ops *p;
>>
>> p is typically a very bad name in a function with
>> many pointer variables etc. Maybe "fpo" as in the previous
>> patch?
>>
>>> + void *cb, *qd;
>>
>> Please, avoid variable, expecially pointers, declaration in
>> one line.
>
> Here and in other places, I think local variable names and placement
> are just a matter of personal preference.
Of course you can drop my notes if I'm the only one asking.
I started my comment with "Please" :)
Maybe I'm asking too much.
Also, I'm sorry, but I'm strictly against the 'p' name since such
naming makes the code harder to read.
>
>>
>> I'd suggest to use 'rxq' instead of 'qd'. The first paramter
>> of the rx_pkt_burst is 'rxq'.
>>
>> Also 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
>> only. If so, it could be unused variable warning if
>> RTE_ETHDEV_RXTX_CALLBACKS is not defined.
>
> Good point, will move it back under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
>
>>> +
>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return 0;
>>> + }
>>> +#endif
>>> +
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->rxq.data[queue_id];
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_RX
>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
>>> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
>>>
>>> - if (queue_id >= dev->data->nb_rx_queues) {
>>> - RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
>>> + if (qd == NULL) {
>>> + RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
>>
>> RX -> Rx
>>
>>> + queue_id, port_id);
>>> return 0;
>>> }
>>> #endif
>>> - nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
>>> - rx_pkts, nb_pkts);
>>> +
>>> + nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
>>>
>>> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>> - struct rte_eth_rxtx_callback *cb;
>>>
>>> /* __ATOMIC_RELEASE memory order was used when the
>>> * call back was inserted into the list.
>>> @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>> * not required.
>>> */
>>> - cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
>>> - __ATOMIC_RELAXED);
>>> -
>>> - if (unlikely(cb != NULL)) {
>>> - do {
>>> - nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
>>> - nb_pkts, cb->param);
>>> - cb = cb->next;
>>> - } while (cb != NULL);
>>> - }
>>> + cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
>>> + if (unlikely(cb != NULL))
>>> + nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
>>> + nb_rx, nb_pkts, cb);
>>> #endif
>>>
>>> rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
>>> @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>> static inline int
>>> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>> {
>>> - struct rte_eth_dev *dev;
>>> + struct rte_eth_fp_ops *p;
>>> + void *qd;
>>
>> p -> fpo, qd -> rxq
>>
>>> +
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return -EINVAL;
>>> + }
>>> +
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->rxq.data[queue_id];
>>>
>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> - dev = &rte_eth_devices[port_id];
>>> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
>>> - if (queue_id >= dev->data->nb_rx_queues ||
>>> - dev->data->rx_queues[queue_id] == NULL)
>>> + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
>>> + if (qd == NULL)
>>> return -EINVAL;
>>>
>>> - return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
>>> + return (int)(*p->rx_queue_count)(qd);
>>> }
>>>
>>> /**@{@name Rx hardware descriptor states
>>> @@ -5108,21 +5154,30 @@ static inline int
>>> rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>>> uint16_t offset)
>>> {
>>> - struct rte_eth_dev *dev;
>>> - void *rxq;
>>> + struct rte_eth_fp_ops *p;
>>> + void *qd;
>>
>> p -> fpo, qd -> rxq
>>
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_RX
>>> - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return -EINVAL;
>>> + }
>>> #endif
>>> - dev = &rte_eth_devices[port_id];
>>> +
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->rxq.data[queue_id];
>>> +
>>> #ifdef RTE_ETHDEV_DEBUG_RX
>>> - if (queue_id >= dev->data->nb_rx_queues)
>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> + if (qd == NULL)
>>> return -ENODEV;
>>> #endif
>>> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
>>> - rxq = dev->data->rx_queues[queue_id];
>>> -
>>> - return (*dev->rx_descriptor_status)(rxq, offset);
>>> + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
>>> + return (*p->rx_descriptor_status)(qd, offset);
>>> }
>>>
>>> /**@{@name Tx hardware descriptor states
>>> @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>>> static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
>>> uint16_t queue_id, uint16_t offset)
>>> {
>>> - struct rte_eth_dev *dev;
>>> - void *txq;
>>> + struct rte_eth_fp_ops *p;
>>> + void *qd;
>>
>> p -> fpo, qd -> txq
>>
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_TX
>>> - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return -EINVAL;
>>> + }
>>> #endif
>>> - dev = &rte_eth_devices[port_id];
>>> +
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->txq.data[queue_id];
>>> +
>>> #ifdef RTE_ETHDEV_DEBUG_TX
>>> - if (queue_id >= dev->data->nb_tx_queues)
>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> + if (qd == NULL)
>>> return -ENODEV;
>>> #endif
>>> - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
>>> - txq = dev->data->tx_queues[queue_id];
>>> -
>>> - return (*dev->tx_descriptor_status)(txq, offset);
>>> + RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
>>> + return (*p->tx_descriptor_status)(qd, offset);
>>> }
>>>
>>> +/**
>>> + * @internal
>>> + * Helper routine for eth driver tx_burst API.
>>> + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
>>> + * Does necessary pre-processing - invokes TX callbacks if any, etc.
>>
>> TX -> Tx
>>
>>> + *
>>> + * @param port_id
>>> + * The port identifier of the Ethernet device.
>>> + * @param queue_id
>>> + * The index of the transmit queue through which output packets must be
>>> + * sent.
>>> + * @param tx_pkts
>>> + * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
>>> + * which contain the output packets.
>>
>> *nb_pkts* -> @p nb_pkts
>>
>>> + * @param nb_pkts
>>> + * The maximum number of packets to transmit.
>>> + * @return
>>> + * The number of output packets to transmit.
>>> + */
>>> +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
>>> +
>>> /**
>>> * Send a burst of output packets on a transmit queue of an Ethernet device.
>>> *
>>> @@ -5256,20 +5342,34 @@ static inline uint16_t
>>> rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>>> struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>>> {
>>> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> + struct rte_eth_fp_ops *p;
>>> + void *cb, *qd;
>>
>> Same as above
>>
>>> +
>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return 0;
>>> + }
>>> +#endif
>>> +
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->txq.data[queue_id];
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_TX
>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
>>> - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
>>>
>>> - if (queue_id >= dev->data->nb_tx_queues) {
>>> - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
>>> + if (qd == NULL) {
>>> + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>>
>> TX -> Tx
>>
>>> + queue_id, port_id);
>>> return 0;
>>> }
>>> #endif
>>>
>>> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>> - struct rte_eth_rxtx_callback *cb;
>>>
>>> /* __ATOMIC_RELEASE memory order was used when the
>>> * call back was inserted into the list.
>>> @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>>> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>> * not required.
>>> */
>>> - cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
>>> - __ATOMIC_RELAXED);
>>> -
>>> - if (unlikely(cb != NULL)) {
>>> - do {
>>> - nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
>>> - cb->param);
>>> - cb = cb->next;
>>> - } while (cb != NULL);
>>> - }
>>> + cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
>>> + if (unlikely(cb != NULL))
>>> + nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
>>> + nb_pkts, cb);
>>> #endif
>>>
>>> - rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
>>> - nb_pkts);
>>> - return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
>>> + nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
>>> +
>>> + rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
>>> + return nb_pkts;
>>> }
>>>
>>> /**
>>> @@ -5354,31 +5449,42 @@ static inline uint16_t
>>> rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
>>> struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>>> {
>>> - struct rte_eth_dev *dev;
>>> + struct rte_eth_fp_ops *p;
>>> + void *qd;
>>
>> p->fpo, qd->txq
>>
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_TX
>>> - if (!rte_eth_dev_is_valid_port(port_id)) {
>>> - RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> rte_errno = ENODEV;
>>> return 0;
>>> }
>>> #endif
>>>
>>> - dev = &rte_eth_devices[port_id];
>>> + /* fetch pointer to queue data */
>>> + p = &rte_eth_fp_ops[port_id];
>>> + qd = p->txq.data[queue_id];
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_TX
>>> - if (queue_id >= dev->data->nb_tx_queues) {
>>> - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
>>> + if (!rte_eth_dev_is_valid_port(port_id)) {
>>> + RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
>>
>> TX -> Tx
>>
>>> + rte_errno = ENODEV;
>>> + return 0;
>>> + }
>>> + if (qd == NULL) {
>>> + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>>
>> TX -> Tx
>>
>>> + queue_id, port_id);
>>> rte_errno = EINVAL;
>>> return 0;
>>> }
>>> #endif
>>>
>>> - if (!dev->tx_pkt_prepare)
>>> + if (!p->tx_pkt_prepare)
>>
>> Please, change it to compare vs NULL since you touch the line.
>> Just to be consistent with DPDK coding style and lines above.
>
> Ok, I am also fond of explicit comparisons :)
Thanks.
>
>
>>> return nb_pkts;
>>>
>>> - return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
>>> - tx_pkts, nb_pkts);
>>> + return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
>>> }
>>>
>>> #else
>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
>>> index 904bce6ea1..79e62dcf61 100644
>>> --- a/lib/ethdev/version.map
>>> +++ b/lib/ethdev/version.map
>>> @@ -7,6 +7,8 @@ DPDK_22 {
>>> rte_eth_allmulticast_disable;
>>> rte_eth_allmulticast_enable;
>>> rte_eth_allmulticast_get;
>>> + rte_eth_call_rx_callbacks;
>>> + rte_eth_call_tx_callbacks;
>>> rte_eth_dev_adjust_nb_rx_tx_desc;
>>> rte_eth_dev_callback_register;
>>> rte_eth_dev_callback_unregister;
>>> @@ -76,6 +78,7 @@ DPDK_22 {
>>> rte_eth_find_next_of;
>>> rte_eth_find_next_owned_by;
>>> rte_eth_find_next_sibling;
>>> + rte_eth_fp_ops;
>>> rte_eth_iterator_cleanup;
>>> rte_eth_iterator_init;
>>> rte_eth_iterator_next;
>>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 3/3] security: add reserved bitfields
2021-10-11 8:31 0% ` Thomas Monjalon
@ 2021-10-11 16:58 0% ` Akhil Goyal
2021-10-11 22:15 3% ` Stephen Hemminger
2021-10-12 6:59 0% ` Thomas Monjalon
0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-10-11 16:58 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, david.marchand, hemant.agrawal, Anoob Joseph,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde,
Nagadheeraj Rottela, Ankur Dwivedi, ciara.power,
Stephen Hemminger, ray.kinsella, bruce.richardson
> 08/10/2021 22:45, Akhil Goyal:
> > In struct rte_security_ipsec_sa_options, for every new option
> > added, there is an ABI breakage, to avoid, a reserved_opts
> > bitfield is added to for the remaining bits available in the
> > structure.
> > Now for every new sa option, these reserved_opts can be reduced
> > and new option can be added.
>
> How do you make sure this field is initialized to 0?
>
Struct rte_security_ipsec_xform is part of rte_security_capability as well
as a configuration structure in session create.
The user should ensure that a device supports a given option (in capability);
only then will that option take effect, otherwise it is don't-care for the PMD.
The initial values of capabilities are set by the PMD statically, based on the
features that it supports.
So if someone sets a bit in reserved_opts, it will work only if the PMD supports it
and sets the corresponding field in capabilities.
But yes, if a new field is added in future, and the user sets reserved_opts by mistake
and the PMD supports that feature as well, then that feature will be enabled.
This may or may not create an issue, depending on the feature which is enabled.
Should I add a note in the comments to clarify that reserved_opts should be set to 0
and that future releases may change this without notice (but "reserved" in itself suggests that)?
Adding an explicit check in session_create does not make sense to me.
What do you suggest?
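For instance, something along these lines (sketch only; 'N' stands for
however many bits remain in the structure):

        uint32_t reserved_opts : N;
        /**< Reserved bit fields for future extension.
         *
         * User should ensure reserved_opts is cleared (set to zero);
         * subsequent releases may define new options in these bits
         * without further notice.
         */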
Regards,
Akhil
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-11 8:25 0% ` Andrew Rybchenko
@ 2021-10-11 16:52 0% ` Ananyev, Konstantin
2021-10-11 17:22 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 16:52 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for fast-path calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The whole idea behind this new schema:
> > 1. PMDs keep setting up fast-path function pointers and related data
> > inside rte_eth_dev struct in the same way they did it before.
> > 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> > (for secondary process) we call eth_dev_fp_ops_setup, which
> > copies these function and data pointers into rte_eth_fp_ops[port_id].
> > 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> > we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> > into some dummy values.
> > 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> > flat array to call PMD specific functions.
> > That approach should allow us to make rte_eth_devices[] private
> > without introducing regression and help to avoid changes in drivers code.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Overall LGTM, few nits below.
>
> > ---
> > lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
> > lib/ethdev/ethdev_private.h | 7 +++++
> > lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
> > lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> > 4 files changed, 141 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..3eeda6e9f9 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> > RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> > return str == NULL ? -1 : 0;
> > }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > + __rte_unused struct rte_mbuf **rx_pkts,
> > + __rte_unused uint16_t nb_pkts)
> > +{
> > + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>
> May be "unconfigured" -> "stopped" ? Or "non-started" ?
Yes, it can be configured but not started.
So 'not started' seems like a better wording here.
Another option probably: 'not ready'.
What do people think?
...
>
> > + rte_errno = ENOTSUP;
> > + return 0;
> > +}
> > +
> > +struct rte_eth_fp_ops {
> > +
> > + /**
> > + * Rx fast-path functions and related data.
> > + * 64-bit systems: occupies first 64B line
> > + */
>
> As I understand the above comment is for a group of below
> fields. If so, Doxygen annocation for member groups should
> be used.
Ok, and how to do it?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
@ 2021-10-11 16:25 3% ` Ananyev, Konstantin
2021-10-11 17:15 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 16:25 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
> > At queue configure stage always allocate space for maximum possible
> > number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> > pointer to internal queue data without extra checking of current number
> > of configured queues.
> > That would help in future to hide rte_eth_dev and related structures.
> > It means that from now on, each ethdev port will always consume:
> > ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> > bytes of memory for its queue pointers.
> > With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> > 1 file changed, 9 insertions(+), 27 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index ed37f8871b..c8abda6dd7 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >
> > if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> > dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> > - sizeof(dev->data->rx_queues[0]) * nb_queues,
> > + sizeof(dev->data->rx_queues[0]) *
> > + RTE_MAX_QUEUES_PER_PORT,
> > RTE_CACHE_LINE_SIZE);
>
> Looking at it I have few questions:
> 1. Why is nb_queues == 0 case kept as an exception? Yes,
> strictly speaking it is not the problem of the patch,
> DPDK will still segfault (non-debug build) if I
> allocate Tx queues only but call rte_eth_rx_burst().
eth_dev_rx_queue_config(.., nb_queues=0) is used in few places to clean-up things.
> After reading the patch description I thought that
> we're trying to address it.
We do, though I can't see how we can address it in this patch.
Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
or around and setup RX function pointers only when dev->data->rx_queues != NULL.
Same for TX.
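Something roughly like the sketch below (illustrative only, not the final code):

        /* fall back to the dummy burst function when no Rx queues
         * were configured for this port */
        if (dev->data->rx_queues == NULL) {
                fpo->rx_pkt_burst = dummy_eth_rx_burst;
                fpo->rxq.data = NULL;
        } else {
                fpo->rx_pkt_burst = dev->rx_pkt_burst;
                fpo->rxq.data = dev->data->rx_queues;
        }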
> 2. Why do we need to allocate memory dynamically?
> Can we just make rx_queues an array of appropriate size?
Pavan already asked same question.
My answer to him:
Yep we can, and yes it will simplify this piece of code.
The main reason I decided not to do this change now -
it will change the layout of the rte_eth_dev_data structure.
In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data,
as much as possible to avoid any unforeseen performance and functional impacts.
If we'll manage to make rte_eth_dev and rte_eth_dev_data private we can in future
consider that one and other changes in rte_eth_dev and rte_eth_dev_data layouts
without worrying about ABI breakage
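For clarity, the alternative in (2) would roughly mean the layout change
sketched below, which is exactly the kind of change I want to avoid for now:

        struct rte_eth_dev_data {
                ...
                /* instead of the current 'void **rx_queues' / 'void **tx_queues' */
                void *rx_queues[RTE_MAX_QUEUES_PER_PORT];
                void *tx_queues[RTE_MAX_QUEUES_PER_PORT];
                ...
        };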
> May be wasting 512K unconditionally is too much.
> 3. If wasting 512K is too much, I'd consider to move
> allocation to eth_dev_get(). If
Don't understand where 512KB came from.
each ethdev port will always consume:
((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
bytes of memory for its queue pointers.
With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> > if (dev->data->rx_queues == NULL) {
> > dev->data->nb_rx_queues = 0;
> > @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >
> > rxq = dev->data->rx_queues;
> >
> > - for (i = nb_queues; i < old_nb_queues; i++)
> > + for (i = nb_queues; i < old_nb_queues; i++) {
> > (*dev->dev_ops->rx_queue_release)(rxq[i]);
> > - rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> > - RTE_CACHE_LINE_SIZE);
> > - if (rxq == NULL)
> > - return -(ENOMEM);
> > - if (nb_queues > old_nb_queues) {
> > - uint16_t new_qs = nb_queues - old_nb_queues;
> > -
> > - memset(rxq + old_nb_queues, 0,
> > - sizeof(rxq[0]) * new_qs);
> > + rxq[i] = NULL;
>
> It looks like the patch should be rebased on top of
> next-net main because of queue release patches.
>
> [snip]
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks
@ 2021-10-11 16:14 3% ` Jerin Jacob
2021-10-12 8:35 3% ` Kundapura, Ganapati
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-11 16:14 UTC (permalink / raw)
To: Ganapati Kundapura; +Cc: dpdk-dev, Jayatheerthan, Jay
On Thu, Oct 7, 2021 at 6:27 PM Ganapati Kundapura
<ganapati.kundapura@intel.com> wrote:
>
> Added telemetry callbacks to get Rx adapter stats, reset stats and
> to get rx queue config information.
rx -> Rx
Change the subject to eventdev/rx_adapter
>
> Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index 9ac976c..fa7191c 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -23,6 +23,7 @@
> #include "eventdev_pmd.h"
> #include "rte_eventdev_trace.h"
> #include "rte_event_eth_rx_adapter.h"
> +#include <rte_telemetry.h>
Move this to the above block where all <...h> header files are grouped.
>
> #define BATCH_SIZE 32
> #define BLOCK_CNT_THRESHOLD 10
> @@ -2852,6 +2853,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
> struct rte_event_eth_rx_adapter_stats *stats)
> {
> struct rte_event_eth_rx_adapter *rx_adapter;
> + struct rte_eth_event_enqueue_buffer *buf;
> struct rte_event_eth_rx_adapter_stats dev_stats_sum = { 0 };
> struct rte_event_eth_rx_adapter_stats dev_stats;
> struct rte_eventdev *dev;
> @@ -2887,8 +2889,11 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
> if (rx_adapter->service_inited)
> *stats = rx_adapter->stats;
>
> + buf = &rx_adapter->event_enqueue_buffer;
> stats->rx_packets += dev_stats_sum.rx_packets;
> stats->rx_enq_count += dev_stats_sum.rx_enq_count;
> + stats->rx_event_buf_count = buf->count;
> + stats->rx_event_buf_size = buf->events_size;
>
> return 0;
> }
> @@ -3052,3 +3057,146 @@ rte_event_eth_rx_adapter_queue_conf_get(uint8_t id,
>
> return 0;
> }
> +
> +#define RXA_ADD_DICT(stats, s) rte_tel_data_add_dict_u64(d, #s, stats.s)
> +
> +static int
> +handle_rxa_stats(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *d)
> +{
> + uint8_t rx_adapter_id;
> + struct rte_event_eth_rx_adapter_stats rx_adptr_stats;
> +
> + if (params == NULL || strlen(params) == 0 || !isdigit(*params))
> + return -1;
> +
> + /* Get Rx adapter ID from parameter string */
> + rx_adapter_id = atoi(params);
> + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id, -EINVAL);
> +
> + /* Get Rx adapter stats */
> + if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id,
> + &rx_adptr_stats)) {
> + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n");
> + return -1;
> + }
> +
> + rte_tel_data_start_dict(d);
> + rte_tel_data_add_dict_u64(d, "rx_adapter_id", rx_adapter_id);
> + RXA_ADD_DICT(rx_adptr_stats, rx_packets);
> + RXA_ADD_DICT(rx_adptr_stats, rx_poll_count);
> + RXA_ADD_DICT(rx_adptr_stats, rx_dropped);
> + RXA_ADD_DICT(rx_adptr_stats, rx_enq_retry);
> + RXA_ADD_DICT(rx_adptr_stats, rx_event_buf_count);
> + RXA_ADD_DICT(rx_adptr_stats, rx_event_buf_size);
> + RXA_ADD_DICT(rx_adptr_stats, rx_enq_count);
> + RXA_ADD_DICT(rx_adptr_stats, rx_enq_start_ts);
> + RXA_ADD_DICT(rx_adptr_stats, rx_enq_block_cycles);
> + RXA_ADD_DICT(rx_adptr_stats, rx_enq_end_ts);
> + RXA_ADD_DICT(rx_adptr_stats, rx_intr_packets);
> +
> + return 0;
> +}
> +
> +static int
> +handle_rxa_stats_reset(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *d __rte_unused)
> +{
> + uint8_t rx_adapter_id;
> +
> + if (params == NULL || strlen(params) == 0 || ~isdigit(*params))
> + return -1;
> +
> + /* Get Rx adapter ID from parameter string */
> + rx_adapter_id = atoi(params);
> + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id, -EINVAL);
> +
> + /* Reset Rx adapter stats */
> + if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) {
> + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +handle_rxa_get_queue_conf(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *d)
> +{
> + uint8_t rx_adapter_id;
> + uint16_t rx_queue_id;
> + int eth_dev_id;
> + char *token, *l_params;
> + struct rte_event_eth_rx_adapter_queue_conf queue_conf;
> +
> + if (params == NULL || strlen(params) == 0 || !isdigit(*params))
> + return -1;
> +
> + /* Get Rx adapter ID from parameter string */
> + l_params = strdup(params);
> + token = strtok(l_params, ",");
> + rx_adapter_id = strtoul(token, NULL, 10);
> + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(rx_adapter_id, -EINVAL);
> +
> + token = strtok(NULL, ",");
> + if (token == NULL || strlen(token) == 0 || !isdigit(*token))
> + return -1;
> +
> + /* Get device ID from parameter string */
> + eth_dev_id = strtoul(token, NULL, 10);
> + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(eth_dev_id, -EINVAL);
> +
> + token = strtok(NULL, ",");
> + if (token == NULL || strlen(token) == 0 || !isdigit(*token))
> + return -1;
> +
> + /* Get Rx queue ID from parameter string */
> + rx_queue_id = strtoul(token, NULL, 10);
> + if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
> + RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
> + return -EINVAL;
> + }
> +
> + token = strtok(NULL, "\0");
> + if (token != NULL)
> + RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev"
> + " telemetry command, igrnoring");
> +
> + if (rte_event_eth_rx_adapter_queue_conf_get(rx_adapter_id, eth_dev_id,
> + rx_queue_id, &queue_conf)) {
> + RTE_EDEV_LOG_ERR("Failed to get Rx adapter queue config");
> + return -1;
> + }
> +
> + rte_tel_data_start_dict(d);
> + rte_tel_data_add_dict_u64(d, "rx_adapter_id", rx_adapter_id);
> + rte_tel_data_add_dict_u64(d, "eth_dev_id", eth_dev_id);
> + rte_tel_data_add_dict_u64(d, "rx_queue_id", rx_queue_id);
> + RXA_ADD_DICT(queue_conf, rx_queue_flags);
> + RXA_ADD_DICT(queue_conf, servicing_weight);
> + RXA_ADD_DICT(queue_conf.ev, queue_id);
> + RXA_ADD_DICT(queue_conf.ev, sched_type);
> + RXA_ADD_DICT(queue_conf.ev, priority);
> + RXA_ADD_DICT(queue_conf.ev, flow_id);
> +
> + return 0;
> +}
> +
> +RTE_INIT(rxa_init_telemetry)
> +{
> + rte_telemetry_register_cmd("/eventdev/rxa_stats",
> + handle_rxa_stats,
> + "Returns Rx adapter stats. Parameter: rx_adapter_id");
> +
> + rte_telemetry_register_cmd("/eventdev/rxa_stats_reset",
> + handle_rxa_stats_reset,
> + "Reset Rx adapter stats. Parameter: rx_adapter_id");
> +
> + rte_telemetry_register_cmd("/eventdev/rxa_queue_conf",
> + handle_rxa_get_queue_conf,
> + "Returns Rx queue config. Parameter: rxa_id, DevID, que_id");
> +}
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> index 70ca427..acabed4 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> @@ -216,6 +216,10 @@ struct rte_event_eth_rx_adapter_stats {
> /**< Eventdev enqueue count */
> uint64_t rx_enq_retry;
> /**< Eventdev enqueue retry count */
> + uint64_t rx_event_buf_count;
> + /**< Rx event buffered count */
> + uint64_t rx_event_buf_size;
Isn't this an ABI breakage? The CI did not warn about this. Isn't this a public structure?
> + /**< Rx event buffer size */
> uint64_t rx_dropped;
> /**< Received packet dropped count */
> uint64_t rx_enq_start_ts;
> --
> 2.6.4
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
2021-10-11 9:02 0% ` Andrew Rybchenko
@ 2021-10-11 15:47 0% ` Ananyev, Konstantin
2021-10-11 17:03 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 15:47 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay
>
> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> > Rework fast-path ethdev functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
>
> I'm sorry for nit picking here and below:
>
> RX -> Rx, TX -> Tx everywhere above.
>
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > lib/ethdev/ethdev_private.c | 31 +++++
> > lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
> > lib/ethdev/version.map | 3 +
> > 3 files changed, 208 insertions(+), 68 deletions(-)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 3eeda6e9f9..1222c6f84e 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > fpo->txq.data = dev->data->tx_queues;
> > fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > }
> > +
> > +uint16_t
> > +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > + void *opaque)
> > +{
> > + const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > + while (cb != NULL) {
> > + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > + nb_pkts, cb->param);
> > + cb = cb->next;
> > + }
> > +
> > + return nb_rx;
> > +}
> > +
> > +uint16_t
> > +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> > +{
> > + const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > + while (cb != NULL) {
> > + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > + cb->param);
> > + cb = cb->next;
> > + }
> > +
> > + return nb_pkts;
> > +}
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index cdd16d6e57..c0e1a40681 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
> >
> > #include <rte_ethdev_core.h>
> >
> > +/**
> > + * @internal
> > + * Helper routine for eth driver rx_burst API.
>
> rx -> Rx
>
> > + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
> > + * Does necessary post-processing - invokes RX callbacks if any, etc.
>
> RX -> Rx
>
> > + *
> > + * @param port_id
> > + * The port identifier of the Ethernet device.
> > + * @param queue_id
> > + * The index of the receive queue from which to retrieve input packets.
>
> Isn't:
> The index of the queue from which packets are received?
I copied it from comments from rte_eth_rx_burst().
I suppose it is just two ways to say the same thing.
>
> > + * @param rx_pkts
> > + * The address of an array of pointers to *rte_mbuf* structures that
> > + * have been retrieved from the device.
> > + * @param nb_pkts
>
> Should be @param nb_rx
Ack, will fix.
>
> > + * The number of packets that were retrieved from the device.
> > + * @param nb_pkts
> > + * The number of elements in *rx_pkts* array.
>
> @p should be used to refer to a paramter.
To be more precise you are talking about:
s/*rx_pkts*/@ rx_pkts/
?
>
> The description does not help to understand why both nb_rx and
> nb_pkts are necessary. Isn't nb_pkts >= nb_rx and nb_rx
> sufficient?
Nope, that's for callbacks call.
Will update the comment.
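For reference, the Rx callback prototype takes both counts - the number of
packets actually received and the burst size originally requested - roughly:

        typedef uint16_t (*rte_rx_callback_fn)(uint16_t port_id, uint16_t queue,
                struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t max_pkts,
                void *user_param);

so rte_eth_call_rx_callbacks() has to forward both nb_rx and nb_pkts to the
callbacks it invokes.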
> > + * @param opaque
> > + * Opaque pointer of RX queue callback related data.
>
> RX -> Rx
>
> > + *
> > + * @return
> > + * The number of packets effectively supplied to the *rx_pkts* array.
>
> @p should be used to refer to a parameter.
>
> > + */
> > +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > + void *opaque);
> > +
> > /**
> > *
> > * Retrieve a burst of input packets from a receive queue of an Ethernet
> > @@ -4995,23 +5022,37 @@ static inline uint16_t
> > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > {
> > - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > uint16_t nb_rx;
> > + struct rte_eth_fp_ops *p;
>
> p is typically a very bad name in a function with
> many pointer variables etc. Maybe "fpo" as in the previous
> patch?
>
> > + void *cb, *qd;
>
> Please avoid declaring variables, especially pointers, on
> one line.
Here and in other places, I think local variable names and placement
are just a matter of personal preference.
>
> I'd suggest using 'rxq' instead of 'qd'. The first parameter
> of rx_pkt_burst is 'rxq'.
>
> Also 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
> only. If so, it could trigger an unused variable warning if
> RTE_ETHDEV_RXTX_CALLBACKS is not defined.
Good point, will move it back under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
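For illustration, one possible shape of that change (a sketch, not the
final patch):

	struct rte_eth_fp_ops *p;
	void *qd;

	/*
	 * Sketch: declare 'cb' only when callbacks are compiled in, so a
	 * build without RTE_ETHDEV_RXTX_CALLBACKS does not emit an unused
	 * variable warning.
	 */
	#ifdef RTE_ETHDEV_RXTX_CALLBACKS
	void *cb;
	#endif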
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_RX
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return 0;
> > + }
> > +#endif
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->rxq.data[queue_id];
> >
> > #ifdef RTE_ETHDEV_DEBUG_RX
> > RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> >
> > - if (queue_id >= dev->data->nb_rx_queues) {
> > - RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> > + if (qd == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
>
> RX -> Rx
>
> > + queue_id, port_id);
> > return 0;
> > }
> > #endif
> > - nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> > - rx_pkts, nb_pkts);
> > +
> > + nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
> >
> > #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> > - struct rte_eth_rxtx_callback *cb;
> >
> > /* __ATOMIC_RELEASE memory order was used when the
> > * call back was inserted into the list.
> > @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> > * not required.
> > */
> > - cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
> > - __ATOMIC_RELAXED);
> > -
> > - if (unlikely(cb != NULL)) {
> > - do {
> > - nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > - nb_pkts, cb->param);
> > - cb = cb->next;
> > - } while (cb != NULL);
> > - }
> > + cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
> > + if (unlikely(cb != NULL))
> > + nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
> > + nb_rx, nb_pkts, cb);
> > #endif
> >
> > rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
> > @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > static inline int
> > rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> > {
> > - struct rte_eth_dev *dev;
> > + struct rte_eth_fp_ops *p;
> > + void *qd;
>
> p -> fpo, qd -> rxq
>
> > +
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return -EINVAL;
> > + }
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->rxq.data[queue_id];
> >
> > RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > - dev = &rte_eth_devices[port_id];
> > - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
> > - if (queue_id >= dev->data->nb_rx_queues ||
> > - dev->data->rx_queues[queue_id] == NULL)
> > + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
> > + if (qd == NULL)
> > return -EINVAL;
> >
> > - return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
> > + return (int)(*p->rx_queue_count)(qd);
> > }
> >
> > /**@{@name Rx hardware descriptor states
> > @@ -5108,21 +5154,30 @@ static inline int
> > rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> > uint16_t offset)
> > {
> > - struct rte_eth_dev *dev;
> > - void *rxq;
> > + struct rte_eth_fp_ops *p;
> > + void *qd;
>
> p -> fpo, qd -> rxq
>
> >
> > #ifdef RTE_ETHDEV_DEBUG_RX
> > - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return -EINVAL;
> > + }
> > #endif
> > - dev = &rte_eth_devices[port_id];
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->rxq.data[queue_id];
> > +
> > #ifdef RTE_ETHDEV_DEBUG_RX
> > - if (queue_id >= dev->data->nb_rx_queues)
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + if (qd == NULL)
> > return -ENODEV;
> > #endif
> > - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
> > - rxq = dev->data->rx_queues[queue_id];
> > -
> > - return (*dev->rx_descriptor_status)(rxq, offset);
> > + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
> > + return (*p->rx_descriptor_status)(qd, offset);
> > }
> >
> > /**@{@name Tx hardware descriptor states
> > @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> > static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
> > uint16_t queue_id, uint16_t offset)
> > {
> > - struct rte_eth_dev *dev;
> > - void *txq;
> > + struct rte_eth_fp_ops *p;
> > + void *qd;
>
> p -> fpo, qd -> txq
>
> >
> > #ifdef RTE_ETHDEV_DEBUG_TX
> > - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return -EINVAL;
> > + }
> > #endif
> > - dev = &rte_eth_devices[port_id];
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->txq.data[queue_id];
> > +
> > #ifdef RTE_ETHDEV_DEBUG_TX
> > - if (queue_id >= dev->data->nb_tx_queues)
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + if (qd == NULL)
> > return -ENODEV;
> > #endif
> > - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
> > - txq = dev->data->tx_queues[queue_id];
> > -
> > - return (*dev->tx_descriptor_status)(txq, offset);
> > + RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
> > + return (*p->tx_descriptor_status)(qd, offset);
> > }
> >
> > +/**
> > + * @internal
> > + * Helper routine for eth driver tx_burst API.
> > + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
> > + * Does necessary pre-processing - invokes TX callbacks if any, etc.
>
> TX -> Tx
>
> > + *
> > + * @param port_id
> > + * The port identifier of the Ethernet device.
> > + * @param queue_id
> > + * The index of the transmit queue through which output packets must be
> > + * sent.
> > + * @param tx_pkts
> > + * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
> > + * which contain the output packets.
>
> *nb_pkts* -> @p nb_pkts
>
> > + * @param nb_pkts
> > + * The maximum number of packets to transmit.
> > + * @return
> > + * The number of output packets to transmit.
> > + */
> > +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
> > +
> > /**
> > * Send a burst of output packets on a transmit queue of an Ethernet device.
> > *
> > @@ -5256,20 +5342,34 @@ static inline uint16_t
> > rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> > struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > {
> > - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + struct rte_eth_fp_ops *p;
> > + void *cb, *qd;
>
> Same as above
>
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_TX
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return 0;
> > + }
> > +#endif
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->txq.data[queue_id];
> >
> > #ifdef RTE_ETHDEV_DEBUG_TX
> > RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
> >
> > - if (queue_id >= dev->data->nb_tx_queues) {
> > - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> > + if (qd == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>
> TX -> Tx
>
> > + queue_id, port_id);
> > return 0;
> > }
> > #endif
> >
> > #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> > - struct rte_eth_rxtx_callback *cb;
> >
> > /* __ATOMIC_RELEASE memory order was used when the
> > * call back was inserted into the list.
> > @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> > * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> > * not required.
> > */
> > - cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
> > - __ATOMIC_RELAXED);
> > -
> > - if (unlikely(cb != NULL)) {
> > - do {
> > - nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > - cb->param);
> > - cb = cb->next;
> > - } while (cb != NULL);
> > - }
> > + cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
> > + if (unlikely(cb != NULL))
> > + nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
> > + nb_pkts, cb);
> > #endif
> >
> > - rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
> > - nb_pkts);
> > - return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
> > + nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
> > +
> > + rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
> > + return nb_pkts;
> > }
> >
> > /**
> > @@ -5354,31 +5449,42 @@ static inline uint16_t
> > rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
> > struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > {
> > - struct rte_eth_dev *dev;
> > + struct rte_eth_fp_ops *p;
> > + void *qd;
>
> p -> fpo, qd -> txq
>
> >
> > #ifdef RTE_ETHDEV_DEBUG_TX
> > - if (!rte_eth_dev_is_valid_port(port_id)) {
> > - RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > rte_errno = ENODEV;
> > return 0;
> > }
> > #endif
> >
> > - dev = &rte_eth_devices[port_id];
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[port_id];
> > + qd = p->txq.data[queue_id];
> >
> > #ifdef RTE_ETHDEV_DEBUG_TX
> > - if (queue_id >= dev->data->nb_tx_queues) {
> > - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> > + if (!rte_eth_dev_is_valid_port(port_id)) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
>
> TX -> Tx
>
> > + rte_errno = ENODEV;
> > + return 0;
> > + }
> > + if (qd == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>
> TX -> Tx
>
> > + queue_id, port_id);
> > rte_errno = EINVAL;
> > return 0;
> > }
> > #endif
> >
> > - if (!dev->tx_pkt_prepare)
> > + if (!p->tx_pkt_prepare)
>
> Please change it to compare against NULL since you touch the line,
> just to be consistent with DPDK coding style and the lines above.
Ok, I am also fond of explicit comparisons :)
> > return nb_pkts;
> >
> > - return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
> > - tx_pkts, nb_pkts);
> > + return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
> > }
> >
> > #else
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index 904bce6ea1..79e62dcf61 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -7,6 +7,8 @@ DPDK_22 {
> > rte_eth_allmulticast_disable;
> > rte_eth_allmulticast_enable;
> > rte_eth_allmulticast_get;
> > + rte_eth_call_rx_callbacks;
> > + rte_eth_call_tx_callbacks;
> > rte_eth_dev_adjust_nb_rx_tx_desc;
> > rte_eth_dev_callback_register;
> > rte_eth_dev_callback_unregister;
> > @@ -76,6 +78,7 @@ DPDK_22 {
> > rte_eth_find_next_of;
> > rte_eth_find_next_owned_by;
> > rte_eth_find_next_sibling;
> > + rte_eth_fp_ops;
> > rte_eth_iterator_cleanup;
> > rte_eth_iterator_init;
> > rte_eth_iterator_next;
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-09 12:05 0% ` fengchengwen
2021-10-11 1:18 0% ` fengchengwen
2021-10-11 8:35 0% ` Andrew Rybchenko
@ 2021-10-11 15:15 0% ` Ananyev, Konstantin
2 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 15:15 UTC (permalink / raw)
To: fengchengwen, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh, mdr,
Jayatheerthan, Jay
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for fast-path calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The whole idea beyond this new schema:
> > 1. PMDs keep to setup fast-path function pointers and related data
> > inside rte_eth_dev struct in the same way they did it before.
> > 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> > (for secondary process) we call eth_dev_fp_ops_setup, which
> > copies these function and data pointers into rte_eth_fp_ops[port_id].
> > 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> > we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> > into some dummy values.
> > 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> > flat array to call PMD specific functions.
> > That approach should allow us to make rte_eth_devices[] private
> > without introducing regression and help to avoid changes in drivers code.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
> > lib/ethdev/ethdev_private.h | 7 +++++
> > lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
> > lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> > 4 files changed, 141 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..3eeda6e9f9 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> > RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> > return str == NULL ? -1 : 0;
> > }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > + __rte_unused struct rte_mbuf **rx_pkts,
> > + __rte_unused uint16_t nb_pkts)
> > +{
> > + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > + rte_errno = ENOTSUP;
> > + return 0;
> > +}
> > +
> > +static uint16_t
> > +dummy_eth_tx_burst(__rte_unused void *txq,
> > + __rte_unused struct rte_mbuf **tx_pkts,
> > + __rte_unused uint16_t nb_pkts)
> > +{
> > + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > + rte_errno = ENOTSUP;
> > + return 0;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.
Why do we need to hide it here?
rte_eth_fp_ops is a public structure, and it is a helper function that
just resets fields of this structure to some predefined dummy values.
Nice and simple, so I prefer to keep it like that.
>
> > +{
> > + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > + static const struct rte_eth_fp_ops dummy_ops = {
> > + .rx_pkt_burst = dummy_eth_rx_burst,
> > + .tx_pkt_burst = dummy_eth_tx_burst,
> > + .rxq = {.data = dummy_data, .clbk = dummy_data,},
> > + .txq = {.data = dummy_data, .clbk = dummy_data,},
> > + };
> > +
> > + *fpo = dummy_ops;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > + const struct rte_eth_dev *dev)
>
> Because fp_ops and eth_dev have a one-to-one correspondence, it's better
> to use only the port_id parameter.
Same as above:
All this internal helper function does is copy some fields from one structure to another.
Both structures are visible to the ethdev layer.
There is no point adding extra assumptions and complexity here.
>
> > +{
> > + fpo->rx_pkt_burst = dev->rx_pkt_burst;
> > + fpo->tx_pkt_burst = dev->tx_pkt_burst;
> > + fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> > + fpo->rx_queue_count = dev->rx_queue_count;
> > + fpo->rx_descriptor_status = dev->rx_descriptor_status;
> > + fpo->tx_descriptor_status = dev->tx_descriptor_status;
> > +
> > + fpo->rxq.data = dev->data->rx_queues;
> > + fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > +
> > + fpo->txq.data = dev->data->tx_queues;
> > + fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > +}
> > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > index 3724429577..5721be7bdc 100644
> > --- a/lib/ethdev/ethdev_private.h
> > +++ b/lib/ethdev/ethdev_private.h
> > @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> > /* Parse devargs value for representor parameter. */
> > int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> > +/* reset eth fast-path API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth fast-path API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > + const struct rte_eth_dev *dev);
>
> Some drivers change the transmit/receive functions during operation. E.g.
> in the hns3 driver, when a reset is detected, the primary process sets the rx/tx burst
> functions to dummies; after the reset is processed, it restores the correct rx/tx bursts. During this
> process, the send and receive threads are still working, but the burst functions they call have changed. So:
This text above is a bit too cryptic for me...
Are you saying that your driver changes rte_eth_dev.rx_pkt_burst (or tx_pkt_burst) on the fly
(after dev_start() and before dev_stop())?
If so, then generally speaking, it is a bad idea.
While it might work for some limited scenarios, right now it is not supported by the ethdev framework
and might introduce a lot of problems.
> 1. it is recommended that trace be deleted from the dummy function.
You are talking about:
RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
right?
The dummy functions are supposed to be set only when the device is not able to do RX/TX properly
(not attached, or attached but not configured, or attached and configured, but not started).
Obviously, if the app calls rx/tx_burst for such a port, it is a major issue that should be flagged immediately.
So I believe having a log here makes perfect sense.
> 2. make the eth_dev_fp_ops_reset/setup interface public for driver usage.
You mean move their declarations into ethdev_driver.h?
I suppose that could be done, but I still wonder why a driver would need to
call these functions directly?
> > +
> > #endif /* _ETH_PRIVATE_H_ */
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index c8abda6dd7..9f7a0cbb8c 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -44,6 +44,9 @@
> > static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> > struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> > +/* public fast-path API */
> > +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> > /* spinlock for eth device callbacks */
> > static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
> > rte_eth_dev_callback_process(eth_dev,
> > RTE_ETH_EVENT_DESTROY, NULL);
> >
> > + eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> > +
> > rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
> >
> > eth_dev->state = RTE_ETH_DEV_UNUSED;
> > @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
> > (*dev->dev_ops->link_update)(dev, 0);
> > }
> >
> > + /* expose selection of PMD fast-path functions */
> > + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> > +
> > rte_ethdev_trace_start(port_id);
> > return 0;
> > }
> > @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
> > return 0;
> > }
> >
> > + /* point fast-path functions to dummy ones */
> > + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> > +
> > dev->data->dev_started = 0;
> > ret = (*dev->dev_ops->dev_stop)(dev);
> > rte_ethdev_trace_stop(port_id, ret);
> > @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> > return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> > }
> >
> > +RTE_INIT(eth_dev_init_fp_ops)
> > +{
> > + uint32_t i;
> > +
> > + for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> > + eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> > +}
> > +
> > RTE_INIT(eth_dev_init_cb_lists)
> > {
> > uint16_t i;
> > @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
> > if (dev == NULL)
> > return;
> >
> > + /*
> > + * for secondary process, at that point we expect device
> > + * to be already 'usable', so shared data and all function pointers
> > + * for fast-path devops have to be setup properly inside rte_eth_dev.
> > + */
> > + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> > rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> > dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 51cd68de94..d5853dff86 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> > typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> > /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for fast-path ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > + void **data;
> > + /**< points to array of internal queue data pointers */
> > + void **clbk;
> > + /**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * fast-path ethdev functions and related data are hold in a flat array.
> > + * One entry per ethdev.
> > + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> > + * On 32-bit systems contents of this structure fits into one 64B line.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > + /**
> > + * Rx fast-path functions and related data.
> > + * 64-bit systems: occupies first 64B line
> > + */
> > + eth_rx_burst_t rx_pkt_burst;
> > + /**< PMD receive function. */
> > + eth_rx_queue_count_t rx_queue_count;
> > + /**< Get the number of used RX descriptors. */
> > + eth_rx_descriptor_status_t rx_descriptor_status;
> > + /**< Check the status of a Rx descriptor. */
> > + struct rte_ethdev_qdata rxq;
> > + /**< Rx queues data. */
> > + uintptr_t reserved1[3];
> > +
> > + /**
> > + * Tx fast-path functions and related data.
> > + * 64-bit systems: occupies second 64B line
> > + */
> > + eth_tx_burst_t tx_pkt_burst;
>
> Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cacheline?
> Other functions, e.g. rx_queue_count/descriptor_status, are called at low frequency.
I suppose you are talking about a layout like this:
struct rte_eth_fp_ops {
/* first 64B line */
rx_pkt_burst;
tx_pkt_burst;
tx_pkt_prepare;
struct rte_ethdev_qdata rxq;
struct rte_ethdev_qdata txq;
reserved1[1];
/* second 64B line */
...
};
I thought about such a layout, even tried it, but I didn't see any performance gain.
On the other hand, the current layout seems better to me from a structural point of view:
it is more uniform and easier to extend in future (RX and TX data each occupy
a separate 64B line, and each has equal room for extension).
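If it helps, the "two 64B lines" expectation could be pinned down at build
time with something like the following sketch (an assumption on my side:
a 64-bit target with 64-byte cache lines):

	#include <assert.h>
	#include <rte_ethdev.h>

	/*
	 * Sketch: catch accidental growth of the fast-path ops structure
	 * beyond two 64-byte cache lines on 64-bit targets.
	 */
	static_assert(sizeof(struct rte_eth_fp_ops) == 128,
		      "rte_eth_fp_ops expected to span two 64B cache lines");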
> > + /**< PMD transmit function. */
> > + eth_tx_prep_t tx_pkt_prepare;
> > + /**< PMD transmit prepare function. */
> > + eth_tx_descriptor_status_t tx_descriptor_status;
> > + /**< Check the status of a Tx descriptor. */
> > + struct rte_ethdev_qdata txq;
> > + /**< Tx queues data. */
> > + uintptr_t reserved2[3];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> >
> > /**
> > * @internal
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array
2021-10-11 12:43 3% ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
@ 2021-10-11 14:54 0% ` Zhang, Roy Fan
0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 14:54 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
Ciara
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 11, 2021 1:43 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat
> array
>
> Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
> 1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index ce0dca72be..739ad529e5 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -1832,13 +1832,18 @@ static inline uint16_t
> rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
> struct rte_crypto_op **ops, uint16_t nb_ops)
> {
> - struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> + struct rte_crypto_fp_ops *fp_ops;
We may need to use const for fp_ops since we only call the function pointers in it.
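A minimal sketch of that suggestion (const-qualifying the local pointer,
since the helper only reads from the flat array):

	/*
	 * Sketch only: the burst helpers never modify rte_crypto_fp_ops[],
	 * so the local pointer can be const-qualified.
	 */
	const struct rte_crypto_fp_ops *fp_ops;
	void *qp;

	fp_ops = &rte_crypto_fp_ops[dev_id];
	qp = fp_ops->qp.data[qp_id];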
> + void *qp;
>
> rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> - nb_ops = (*dev->dequeue_burst)
> - (dev->data->queue_pairs[qp_id], ops, nb_ops);
> +
> + fp_ops = &rte_crypto_fp_ops[dev_id];
> + qp = fp_ops->qp.data[qp_id];
> +
> + nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
> +
> #ifdef RTE_CRYPTO_CALLBACKS
> - if (unlikely(dev->deq_cbs != NULL)) {
> + if (unlikely(fp_ops->qp.deq_cb != NULL)) {
> struct rte_cryptodev_cb_rcu *list;
> struct rte_cryptodev_cb *cb;
>
> @@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id,
> uint16_t qp_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory
> order is
> * not required.
> */
> - list = &dev->deq_cbs[qp_id];
> + list = (struct rte_cryptodev_cb_rcu *)&fp_ops-
> >qp.deq_cb[qp_id];
> rte_rcu_qsbr_thread_online(list->qsbr, 0);
> cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
>
> @@ -1899,10 +1904,13 @@ static inline uint16_t
> rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> struct rte_crypto_op **ops, uint16_t nb_ops)
> {
> - struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> + struct rte_crypto_fp_ops *fp_ops;
Same as above
> + void *qp;
>
> + fp_ops = &rte_crypto_fp_ops[dev_id];
> + qp = fp_ops->qp.data[qp_id];
> #ifdef RTE_CRYPTO_CALLBACKS
> - if (unlikely(dev->enq_cbs != NULL)) {
> + if (unlikely(fp_ops->qp.enq_cb != NULL)) {
> struct rte_cryptodev_cb_rcu *list;
> struct rte_cryptodev_cb *cb;
>
> @@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id,
> uint16_t qp_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory
> order is
> * not required.
> */
> - list = &dev->enq_cbs[qp_id];
> + list = (struct rte_cryptodev_cb_rcu *)&fp_ops-
> >qp.enq_cb[qp_id];
> rte_rcu_qsbr_thread_online(list->qsbr, 0);
> cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
>
> @@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id,
> uint16_t qp_id,
> #endif
>
> rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> - return (*dev->enqueue_burst)(
> - dev->data->queue_pairs[qp_id], ops, nb_ops);
> + return fp_ops->enqueue_burst(qp, ops, nb_ops);
> }
>
>
> --
> 2.25.1
Other than the minor comments above
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers
@ 2021-10-11 14:48 2% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev
Pickup new FW interface definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
1 file changed, 1176 insertions(+), 35 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
*/
#define MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define MAE_CT_VNI_MODE_2VLAN 0x3
+
/* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
*/
/* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
/* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
* be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
*/
#define MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
/* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
/* enum: Selects the virtual NIC plugged into the MAE switch */
#define MAE_MPORT_END_VNIC 0x2
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define MAE_COUNTER_TYPE_CT 0x1
+
/* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
* platforms
*/
@@ -4547,6 +4578,8 @@
#define MC_CMD_MEDIA_BASE_T 0x6
/* enum: QSFP+. */
#define MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define MC_CMD_MEDIA_DSFP 0x8
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
/* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
/***********************************/
/* MC_CMD_GET_PHY_MEDIA_INFO
* Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
* returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
*/
#define MC_CMD_GET_PHY_MEDIA_INFO 0x4b
#define MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
/* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
#define MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
#define NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
/* enum: FPGA Validate XCLBIN */
#define NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
/* enum: MUM firmware partition */
#define NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
/* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
#define NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
/* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
#define NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
/* enum: Test partition on SmartNIC system microcontroller (SUC) */
#define NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
/* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
/* QUEUE_CRC_MODE structuredef */
#define QUEUE_CRC_MODE_LEN 1
#define QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
* rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
*/
#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
/* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
#define MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
/***********************************/
/* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
#define MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17246,6 +17486,9 @@
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17793,6 +18036,9 @@
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -19900,6 +20146,18 @@
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
/***********************************/
/* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
/* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
#define MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
#define RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
#define RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
#define RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
#define RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
#define RX_PREFIX_FIELD_INFO_TYPE_LBN 24
#define RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
@@ -26063,6 +26333,10 @@
#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
/* enum: Read internal link configuration. */
#define MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
/* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
* free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
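For illustration only (this is not part of the diff): the response is a NUM_STATS count followed by an array of 64-bit statistics, and the MC_CMD_FPGA_MAC_* enum values appear to give the array index of each statistic. A minimal decode sketch, assuming that indexing and a little-endian host:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Read one 64-bit statistic from a raw MC_CMD_FPGA_OP_GET_MAC_STATS response.
 * 'stat' is one of the MC_CMD_FPGA_MAC_* enum values above.
 * Returns 0 on success, -1 if the statistic is not present in the response.
 */
static int
fpga_mac_stat_get(const uint8_t *resp, size_t resp_len,
                  unsigned int stat, uint64_t *valuep)
{
    uint32_t num_stats, lo, hi;
    size_t ofst;

    if (resp_len < MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN)
        return -1;
    memcpy(&num_stats,
           resp + MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST,
           sizeof(num_stats));
    if (stat >= num_stats ||
        stat >= MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(resp_len))
        return -1;

    ofst = MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST +
           (size_t)stat * MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN;
    memcpy(&lo, resp + ofst, sizeof(lo));       /* STATISTICS_LO */
    memcpy(&hi, resp + ofst + 4, sizeof(hi));   /* STATISTICS_HI */
    *valuep = ((uint64_t)hi << 32) | lo;
    return 0;
}
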
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
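Not part of the patch, just a sketch of how a caller might build this request from the definitions above. Only the CFG_MTU bit is set in CONTROL, so DRAIN and FCNTL stay unchanged; a little-endian host is assumed and delivery of the buffer through the driver's usual MCDI transport is left out.

#include <stdint.h>
#include <string.h>

/* Store a 32-bit value into an MCDI payload at a byte offset
 * (little-endian host assumed). */
static void
mcdi_put_dword(uint8_t *buf, size_t ofst, uint32_t value)
{
    memcpy(buf + ofst, &value, sizeof(value));
}

/* Build an MC_CMD_FPGA / OP_SET_INTERNAL_MAC request that reprograms only
 * the MTU of the internal FPGA MAC. */
static void
fpga_set_internal_mac_mtu(uint8_t req[MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN],
                          uint32_t mtu)
{
    memset(req, 0, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN);
    mcdi_put_dword(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST,
                   MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC);
    /* CONTROL: select only the MTU parameter (CFG_MTU is bit 0). */
    mcdi_put_dword(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST,
                   1u << MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN);
    mcdi_put_dword(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST, mtu);
}
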
/***********************************/
/* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
/* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
#define UUID_NODE_LBN 80
#define UUID_NODE_WIDTH 48
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define MC_CMD_PLUGIN_ALLOC 0x1ad
+#define MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
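Purely illustrative (not part of this header): filling in an MC_CMD_PLUGIN_ALLOC request for a given extension UUID, accepting the extension from any administrative group, and pulling the handle out of the response. The MCDI transport itself is deliberately omitted and a little-endian host is assumed.

#include <stdint.h>
#include <string.h>

/* Encode an MC_CMD_PLUGIN_ALLOC request for the given extension UUID,
 * accepting the extension from any administrative group. */
static void
plugin_alloc_encode(uint8_t req[MC_CMD_PLUGIN_ALLOC_IN_LEN],
                    const uint8_t uuid[MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN])
{
    uint16_t group = MC_CMD_PLUGIN_ALLOC_IN_ANY;

    memset(req, 0, MC_CMD_PLUGIN_ALLOC_IN_LEN);
    memcpy(req + MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST, uuid,
           MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN);
    memcpy(req + MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST, &group,
           MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN);
}

/* Extract the extension handle from an MC_CMD_PLUGIN_ALLOC response;
 * the handle is later used with MC_CMD_PLUGIN_GET_META_* and freed with
 * MC_CMD_PLUGIN_FREE. */
static uint32_t
plugin_alloc_decode(const uint8_t resp[MC_CMD_PLUGIN_ALLOC_OUT_LEN])
{
    uint32_t handle;

    memcpy(&handle, resp + MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST,
           sizeof(handle));
    return handle;
}
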
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define MC_CMD_PLUGIN_FREE 0x1ae
+#define MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values is not expected to be machine-readable.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
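Since the EXTENSION_KVS blob is author-supplied and must be parsed defensively, here is a small sketch (not part of the header) of walking a fully reassembled blob of TOTAL_SIZE bytes. Only the zero-termination is trusted; a malformed trailing entry simply ends the walk.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Print the key/value pairs of a reassembled EXTENSION_KVS blob.
 * 'total_size' is the TOTAL_SIZE reported by MC_CMD_PLUGIN_GET_META_PUBLISHER.
 */
static void
plugin_kvs_dump(const char *blob, size_t total_size)
{
    size_t pos = 0;

    while (pos < total_size) {
        const char *key = blob + pos;
        size_t key_len = strnlen(key, total_size - pos);
        const char *value;
        size_t value_len;

        if (key_len == total_size - pos)
            break;          /* Unterminated key: stop parsing. */
        pos += key_len + 1;

        value = blob + pos;
        value_len = strnlen(value, total_size - pos);
        if (value_len == total_size - pos)
            break;          /* Key without a terminated value. */
        pos += value_len + 1;

        printf("%s: %s\n", key, value);
    }
}
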
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
/* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
* an individual extension.
*/
@@ -26561,6 +27173,100 @@
#define PLUGIN_EXTENSION_RESERVED_LBN 137
#define PLUGIN_EXTENSION_RESERVED_WIDTH 23
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped in to units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define MC_CMD_PLUGIN_REQ 0x1b3
+#define MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by MC_CMD_PLUGIN_GET_META_MSG_IN_MCDI_PARAM_SIZE.
+ */
+#define MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
/* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
* space that maps to a contiguous region of TRGT_ADDR space. Addresses
* DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
#define MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
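A compact illustration (not part of the diff, little-endian host assumed) of the request/response pair: the request carries only the virtio device ID and the response currently carries only MAX_QUEUES.

#include <stdint.h>
#include <string.h>

/* Encode an MC_CMD_VIRTIO_GET_CAPABILITIES request; 'device_id' uses the
 * device IDs from the virtio spec (e.g. 1 for a network device). */
static void
virtio_get_caps_encode(uint8_t req[MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN],
                       uint32_t device_id)
{
    memcpy(req + MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST,
           &device_id, MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN);
}

/* Extract the maximum per-device queue count from the response. */
static uint32_t
virtio_get_caps_max_queues(const uint8_t resp[MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN])
{
    uint32_t max_queues;

    memcpy(&max_queues,
           resp + MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST,
           sizeof(max_queues));
    return max_queues;
}
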
+
/***********************************/
/* MC_CMD_VIRTIO_INIT_QUEUE
* Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
#define PCIE_FUNCTION_INTF_LBN 32
#define PCIE_FUNCTION_INTF_WIDTH 32
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define QUEUE_ID_ABS_VI_OFST 0
+#define QUEUE_ID_ABS_VI_LEN 2
+#define QUEUE_ID_ABS_VI_LBN 0
+#define QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define QUEUE_ID_REL_QUEUE_LBN 16
+#define QUEUE_ID_REL_QUEUE_WIDTH 1
+#define QUEUE_ID_RESERVED_LBN 17
+#define QUEUE_ID_RESERVED_WIDTH 15
+
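As a sketch (not part of the header), packing and unpacking the 32-bit QUEUE_ID using the LBN/WIDTH definitions above; reserved bits are left at zero.

#include <stdint.h>

/* Pack an absolute VI number and a VI-relative queue number into the
 * QUEUE_ID dword representation. */
static uint32_t
queue_id_pack(uint16_t abs_vi, unsigned int rel_queue)
{
    return ((uint32_t)abs_vi << QUEUE_ID_ABS_VI_LBN) |
           ((uint32_t)(rel_queue & ((1u << QUEUE_ID_REL_QUEUE_WIDTH) - 1))
            << QUEUE_ID_REL_QUEUE_LBN);
}

/* Unpack the same fields from a QUEUE_ID dword. */
static void
queue_id_unpack(uint32_t queue_id, uint16_t *abs_vip, unsigned int *rel_queuep)
{
    *abs_vip = (uint16_t)((queue_id >> QUEUE_ID_ABS_VI_LBN) &
                          ((1u << QUEUE_ID_ABS_VI_WIDTH) - 1));
    *rel_queuep = (queue_id >> QUEUE_ID_REL_QUEUE_LBN) &
                  ((1u << QUEUE_ID_REL_QUEUE_WIDTH) - 1);
}
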
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
* Enable descriptor proxying for function into target event queue. Returns VI
* allocation info for the proxy source function, so that the caller can map
* absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
*/
#define MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be extended
+ * width event queue
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
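Not part of the patch: for a multi-queue proxy device each source queue has to be enabled individually, so a caller issues one request per queue (0 to max_virtqueues-1). A sketch of building a single request, little-endian host assumed:

#include <stdint.h>
#include <string.h>

/* Build one MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE request; the caller repeats
 * this per source queue, pointing each at an extended-width target EVQ. */
static void
desc_proxy_enable_queue_encode(uint8_t req[MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN],
                               uint32_t handle, uint32_t source_queue,
                               uint32_t target_evq)
{
    memcpy(req + MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST,
           &handle, sizeof(handle));
    memcpy(req + MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST,
           &source_queue, sizeof(source_queue));
    memcpy(req + MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST,
           &target_evq, sizeof(target_evq));
}
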
+
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
*/
#define MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
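For illustration (not part of the header): each VI_MAP element is a QUEUE_ID-style dword, so an absolute VI seen in a descriptor proxy event can be mapped back to its source queue by scanning the array. Little-endian host assumed.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Given a raw MC_CMD_DESC_PROXY_GET_VI_INFO response, find which source
 * queue (array index) corresponds to the given absolute VI number.
 * Returns the queue index, or -1 if the VI is not in the map.
 */
static int
desc_proxy_find_queue(const uint8_t *resp, size_t resp_len, uint16_t abs_vi)
{
    unsigned int num, i;

    num = MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(resp_len);
    for (i = 0; i < num; i++) {
        uint32_t entry;
        uint16_t entry_vi;

        memcpy(&entry,
               resp + MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST +
               i * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN,
               sizeof(entry));
        /* Entries are QUEUE_ID structures: ABS_VI occupies bits 0-15. */
        entry_vi = (uint16_t)((entry >> QUEUE_ID_ABS_VI_LBN) &
                              ((1u << QUEUE_ID_ABS_VI_WIDTH) - 1));
        if (entry_vi == abs_vi)
            return (int)i;
    }
    return -1;
}
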
+
/***********************************/
/* MC_CMD_GET_ADDR_SPC_ID
* Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
/* The total number of counters lists available to allocate. A value of zero
* indicates that counter lists are not supported by the NIC. (But single
* counters may still be.)
@@ -29429,6 +30300,87 @@
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with a ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counters lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
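Not part of the diff: a client that wants a non-AR counter type should first inspect COUNTER_TYPES_SUPPORTED and fall back to "AR only" when the response is too short to carry the V2 fields, as the comment above prescribes. A sketch, assuming a little-endian host and taking the MAE_COUNTER_TYPE value as a parameter:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Check whether a given MAE_COUNTER_TYPE value is supported, based on a raw
 * MC_CMD_MAE_GET_CAPS response of 'resp_len' bytes. A V1-sized response
 * implies that only AR counters (type 0) are available.
 */
static bool
mae_counter_type_supported(const uint8_t *resp, size_t resp_len,
                           unsigned int counter_type)
{
    uint32_t mask;

    if (resp_len < MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST +
        MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN)
        return counter_type == 0;   /* V1 response: AR counters only. */

    memcpy(&mask,
           resp + MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST,
           sizeof(mask));
    return counter_type < 32 && (mask & (1u << counter_type)) != 0;
}
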
/***********************************/
/* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
/***********************************/
/* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
*/
#define MC_CMD_MAE_COUNTER_ALLOC 0x143
#define MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
#define MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
/* The number of counters that the driver would like allocated */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. Packets with generation count >= GENERATION_COUNT will
* contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
*/
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
#define MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
#define MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. A packet with generation count == GENERATION_COUNT will
* contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
*/
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
#define MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
/* The RxQ to write packets to. */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
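Not part of the diff: a V2 stream-start request combining the RxQ/packet-size pair with a counter type mask might be assembled as below. 'counter_types_mask' is built from MAE_COUNTER_TYPE bit positions (e.g. 0x1 for AR only); little-endian host assumed.

#include <stdint.h>
#include <string.h>

/* Encode an MC_CMD_MAE_COUNTERS_STREAM_START_V2 request. QID and PACKET_SIZE
 * are 16-bit fields; FLAGS is left at zero (ZERO_SQUASH_DISABLE and
 * COUNTER_STALL_EN both clear); COUNTER_TYPES_MASK selects which counter
 * types are streamed. */
static void
mae_counters_stream_start_v2_encode(uint8_t req[MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN],
                                    uint16_t rxq, uint16_t packet_size,
                                    uint32_t counter_types_mask)
{
    memset(req, 0, MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN);
    memcpy(req + MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST,
           &rxq, sizeof(rxq));
    memcpy(req + MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST,
           &packet_size, sizeof(packet_size));
    memcpy(req + MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST,
           &counter_types_mask, sizeof(counter_types_mask));
}
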
/* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
/* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
*/
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
/***********************************/
/* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
/* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
*/
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
+/* MAE_MPORT_DESC_V2 structuredef */
+#define MAE_MPORT_DESC_V2_LEN 56
+#define MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define MAE_MPORT_DESC_V2_UUID_OFST 16
+#define MAE_MPORT_DESC_V2_UUID_LEN 16
+#define MAE_MPORT_DESC_V2_UUID_LBN 128
+#define MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
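Purely as a sketch (not part of the header): decoding the fields that distinguish a PF from a VF in a V2 m-port descriptor. A VF_IDX equal to MAE_MPORT_DESC_V2_VF_IDX_NULL marks a PF; little-endian host assumed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Print the owner of a VNIC-type m-port from a raw 56-byte MAE_MPORT_DESC_V2
 * entry. Non-VNIC types and plugin-owned VNICs are just reported as such. */
static void
mae_mport_desc_v2_print_owner(const uint8_t desc[MAE_MPORT_DESC_V2_LEN])
{
    uint32_t mport_type, client_type;
    uint16_t pf_idx, vf_idx;

    memcpy(&mport_type, desc + MAE_MPORT_DESC_V2_MPORT_TYPE_OFST,
           sizeof(mport_type));
    if (mport_type != MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC) {
        printf("not a VNIC m-port (type %u)\n", mport_type);
        return;
    }

    memcpy(&client_type, desc + MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST,
           sizeof(client_type));
    if (client_type != MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION) {
        printf("VNIC owned by a plugin\n");
        return;
    }

    memcpy(&pf_idx, desc + MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST,
           sizeof(pf_idx));
    memcpy(&vf_idx, desc + MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST,
           sizeof(vf_idx));
    if (vf_idx == MAE_MPORT_DESC_V2_VF_IDX_NULL)
        printf("PF %u\n", pf_idx);
    else
        printf("PF %u VF %u\n", pf_idx, vf_idx);
}
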
/***********************************/
/* MC_CMD_MAE_MPORT_ENUMERATE
--
2.30.2
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
2021-10-11 12:43 2% ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-11 14:45 0% ` Zhang, Roy Fan
0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 14:45 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
Ciara
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 11, 2021 1:43 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
>
> Move fastpath inline function pointers from rte_cryptodev into a
> separate structure accessed via a flat array.
> The intention is to make rte_cryptodev and related structures private
> to avoid future API/ABI breakages.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/cryptodev/cryptodev_pmd.c | 51
> ++++++++++++++++++++++++++++++
> lib/cryptodev/cryptodev_pmd.h | 11 +++++++
> lib/cryptodev/rte_cryptodev.c | 29 +++++++++++++++++
> lib/cryptodev/rte_cryptodev_core.h | 29 +++++++++++++++++
> lib/cryptodev/version.map | 5 +++
> 5 files changed, 125 insertions(+)
>
> diff --git a/lib/cryptodev/cryptodev_pmd.c
> b/lib/cryptodev/cryptodev_pmd.c
> index 44a70ecb35..4646708045 100644
> --- a/lib/cryptodev/cryptodev_pmd.c
> +++ b/lib/cryptodev/cryptodev_pmd.c
> @@ -4,6 +4,7 @@
>
> #include <sys/queue.h>
>
> +#include <rte_errno.h>
> #include <rte_string_fns.h>
> #include <rte_malloc.h>
>
> @@ -160,3 +161,53 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev
> *cryptodev)
>
When a device is removed - i.e. when rte_pci_remove() is called -
cryptodev_fp_ops_reset() will never be called, which may expose a problem.
It looks like cryptodev_fp_ops_reset() needs to be called here too.
> return 0;
...
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3] ci: update machine meson option to platform
2021-10-04 13:29 4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
@ 2021-10-11 13:40 4% ` Juraj Linkeš
0 siblings, 0 replies; 200+ results
From: Juraj Linkeš @ 2021-10-11 13:40 UTC (permalink / raw)
To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš
The way we're building DPDK in CI, with -Dmachine=default, has not been
updated since the option got replaced, in order to preserve a
backwards-compatible build call that facilitates ABI verification between
DPDK versions. Update the call to use -Dplatform=generic, which is the most
up-to-date way to execute the same build and is now present in all DPDK
versions the ABI check verifies.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
v3: ci retest
---
.ci/linux-build.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..06aaa79100 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
OPTS="$OPTS -Dexamples=all"
fi
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -Dplatform=generic"
OPTS="$OPTS --default-library=$DEF_LIB"
OPTS="$OPTS --buildtype=debugoptimized"
OPTS="$OPTS -Dcheck_includes=true"
--
2.20.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v8] ethdev: fix representor port ID search by name
` (2 preceding siblings ...)
2021-10-11 12:30 4% ` [dpdk-dev] [PATCH v7] " Andrew Rybchenko
@ 2021-10-11 12:53 4% ` Andrew Rybchenko
3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 12:53 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
Cc: dev, Viacheslav Galaktionov, Xueming Li
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
The patch is required for all PMDs which do not provide representor
info on the representor itself.
The function rte_eth_representor_id_get() is used in
eth_representor_cmp(), which is required by the ethdev class iterator to
search for an ethdev port ID by name (representor case). Before the patch
the function is called on the representor itself and tries to get
representor info to match.
Search of a port ID by name is used after hotplug to find out the port ID
of the just plugged device.
Getting a list of representors from a representor does not make sense.
Instead, a backer device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.
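For example (illustrative sketch only, not taken from the patch; the devargs
string is made up), an application that hotplugs a device together with one
representor can resolve the representor's port ID through the ethdev class
iterator, which is exactly the path that goes through eth_representor_cmp()
and rte_eth_representor_id_get():

#include <errno.h>
#include <stdint.h>

#include <rte_dev.h>
#include <rte_ethdev.h>

/* Hotplug a device with one representor and find the representor's port ID
 * by iterating over the ethdev ports matching the same devargs. */
static int
attach_and_find_representor(uint16_t *port_idp)
{
    const char *devargs = "0000:01:00.0,representor=[0]";
    struct rte_dev_iterator iterator;
    uint16_t port_id;
    int rc;

    rc = rte_dev_probe(devargs);
    if (rc != 0)
        return rc;

    RTE_ETH_FOREACH_MATCHING_DEV(port_id, devargs, &iterator) {
        /* Matching the representor here relies on
         * rte_eth_representor_id_get(), which now queries the backing
         * device identified by data->backer_port_id. */
        *port_idp = port_id;
        rte_eth_iterator_cleanup(&iterator);
        return 0;
    }
    return -ENODEV;
}
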
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
The new field is added into a hole in the rte_eth_dev_data structure.
The patch does not change the ABI, but extra care is required since the ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI breaking release.
Potentially this is bad for out-of-tree drivers which implement
representors but do not fill in the new backer_port_id field in the
rte_eth_dev_data structure: getting the port ID by name will not work for them.
mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure whether we patch it correctly.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
v8:
- restore lost description improvements
v7:
- use dpdk_dev in net/mlx5 as suggested by Viacheslav O.
v6:
- provide more information in the changeset description
v5:
- try to improve name: backer_port_id instead of parent_port_id
- init new field to RTE_MAX_ETHPORTS on allocation to avoid
zero port usage by default
v4:
- apply mlx5 review notes: remove fallback from generic ethdev
code and add fallback to mlx5 code to handle legacy usecase
v3:
- fix mlx5 build breakage
v2:
- fix mlx5 review notes
- try device port ID first before parent in order to address
backward compatibility issue
drivers/net/bnxt/bnxt_reps.c | 1 +
drivers/net/enic/enic_vf_representor.c | 1 +
drivers/net/i40e/i40e_vf_representor.c | 1 +
drivers/net/ice/ice_dcf_vf_representor.c | 1 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
lib/ethdev/ethdev_driver.h | 6 +++---
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 9 +++++----
lib/ethdev/rte_ethdev_core.h | 6 ++++++
11 files changed, 46 insertions(+), 8 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = rep_params->vf_id;
+ eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = vf->vf_id;
+ eth_dev->data->backer_port_id = pf->port_id;
eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
sizeof(struct rte_ether_addr) *
ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = pf->dev_data->port_id;
/* Setting the number queues allocated to the VF */
ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
vf_rep_eth_dev->data->representor_id = repr->vf_id;
+ vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
/* Set representor device ops */
ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..3858984f02 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
priv->mp_id.port_id = eth_dev->data->port_id;
strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..9de8adecf4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
/*
* Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
* For backward compatibility, if no representor info, direct
* map legacy VF (no controller and pf).
*
- * @param ethdev
- * Handle of ethdev port.
+ * @param port_id
+ * Port ID of the backing device.
* @param type
* Representor type.
* @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
*/
__rte_internal
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
c = i / (np * nf);
p = (i / nf) % np;
f = i % nf;
- if (rte_eth_representor_id_get(edev,
+ if (rte_eth_representor_id_get(edev->data->backer_port_id,
eth_da.type,
eth_da.nb_mh_controllers == 0 ? -1 :
eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
eth_dev = eth_dev_get(port_id);
strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
eth_dev->data->mtu = RTE_ETHER_MTU;
pthread_mutex_init(ð_dev->data->flow_ops_mutex, NULL);
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
}
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
return -EINVAL;
/* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+ ret = rte_eth_representor_info_get(port_id, NULL);
if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
controller == -1 && pf == -1) {
/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
if (info == NULL)
return -ENOMEM;
info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+ ret = rte_eth_representor_info_get(port_id, info);
if (ret < 0)
goto out;
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
continue;
if (info->ranges[i].id_end < info->ranges[i].id_base) {
RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- ethdev->data->port_id, info->ranges[i].id_base,
+ port_id, info->ranges[i].id_base,
info->ranges[i].id_end, i);
continue;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
/**< Switch-specific identifier.
* Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
*/
+ uint16_t backer_port_id;
+ /**< Port ID of the backing device.
+ * This device will be used to query representor
+ * info and calculate representor IDs.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
--
2.30.2
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array
2021-10-11 12:43 2% ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-11 12:43 3% ` Akhil Goyal
2021-10-11 14:54 0% ` Zhang, Roy Fan
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-11 12:43 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal
Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in the user app are required) and
PMD developers (no changes in the PMD are required).
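A rough sketch of the effect (simplified from the diff below): the
application-facing calls are untouched, only the lookup inside the
inline wrappers changes:

    /* application fast path, unchanged */
    uint16_t nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, nb_ops);

    /* what the inline wrapper now does underneath (simplified) */
    struct rte_crypto_fp_ops *fp_ops = &rte_crypto_fp_ops[dev_id];
    void *qp = fp_ops->qp.data[qp_id];
    nb = fp_ops->dequeue_burst(qp, ops, nb_ops);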
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ce0dca72be..739ad529e5 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1832,13 +1832,18 @@ static inline uint16_t
rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
{
- struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+ struct rte_crypto_fp_ops *fp_ops;
+ void *qp;
rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
- nb_ops = (*dev->dequeue_burst)
- (dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qp = fp_ops->qp.data[qp_id];
+
+ nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
+
#ifdef RTE_CRYPTO_CALLBACKS
- if (unlikely(dev->deq_cbs != NULL)) {
+ if (unlikely(fp_ops->qp.deq_cb != NULL)) {
struct rte_cryptodev_cb_rcu *list;
struct rte_cryptodev_cb *cb;
@@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- list = &dev->deq_cbs[qp_id];
+ list = (struct rte_cryptodev_cb_rcu *)&fp_ops->qp.deq_cb[qp_id];
rte_rcu_qsbr_thread_online(list->qsbr, 0);
cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
@@ -1899,10 +1904,13 @@ static inline uint16_t
rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
{
- struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+ struct rte_crypto_fp_ops *fp_ops;
+ void *qp;
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qp = fp_ops->qp.data[qp_id];
#ifdef RTE_CRYPTO_CALLBACKS
- if (unlikely(dev->enq_cbs != NULL)) {
+ if (unlikely(fp_ops->qp.enq_cb != NULL)) {
struct rte_cryptodev_cb_rcu *list;
struct rte_cryptodev_cb *cb;
@@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- list = &dev->enq_cbs[qp_id];
+ list = (struct rte_cryptodev_cb_rcu *)&fp_ops->qp.enq_cb[qp_id];
rte_rcu_qsbr_thread_online(list->qsbr, 0);
cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
@@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
#endif
rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
- return (*dev->enqueue_burst)(
- dev->data->queue_pairs[qp_id], ops, nb_ops);
+ return fp_ops->enqueue_burst(qp, ops, nb_ops);
}
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
@ 2021-10-11 12:43 2% ` Akhil Goyal
2021-10-11 14:45 0% ` Zhang, Roy Fan
2021-10-11 12:43 3% ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-11 12:43 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal
Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.
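A sketch of the resulting behaviour (together with the next patch, which
switches the inline wrappers to this array): until rte_cryptodev_start()
publishes the real PMD ops, the flat-array slot holds the dummy handlers
added below, so a stray burst call returns 0 and sets rte_errno instead
of dereferencing unset PMD pointers:

    /* device not yet configured/started */
    rte_errno = 0;
    uint16_t nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, nb_ops);
    if (nb == 0 && rte_errno == ENOTSUP)
            printf("cryptodev %u is not started yet\n", dev_id);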
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/cryptodev/cryptodev_pmd.c | 51 ++++++++++++++++++++++++++++++
lib/cryptodev/cryptodev_pmd.h | 11 +++++++
lib/cryptodev/rte_cryptodev.c | 29 +++++++++++++++++
lib/cryptodev/rte_cryptodev_core.h | 29 +++++++++++++++++
lib/cryptodev/version.map | 5 +++
5 files changed, 125 insertions(+)
diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 44a70ecb35..4646708045 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -4,6 +4,7 @@
#include <sys/queue.h>
+#include <rte_errno.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -160,3 +161,53 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
return 0;
}
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused void *qp,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto enqueue burst requested for unconfigured device");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused void *qp,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto dequeue burst requested for unconfigured device");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_crypto_fp_ops dummy = {
+ .enqueue_burst = dummy_crypto_enqueue_burst,
+ .dequeue_burst = dummy_crypto_dequeue_burst,
+ .qp = {
+ .data = dummy_data,
+ .enq_cb = dummy_data,
+ .deq_cb = dummy_data,
+ },
+ };
+
+ *fp_ops = dummy;
+}
+
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+ const struct rte_cryptodev *dev)
+{
+ fp_ops->enqueue_burst = dev->enqueue_burst;
+ fp_ops->dequeue_burst = dev->dequeue_burst;
+ fp_ops->qp.data = dev->data->queue_pairs;
+ fp_ops->qp.enq_cb = (void **)(uintptr_t)dev->enq_cbs;
+ fp_ops->qp.deq_cb = (void **)(uintptr_t)dev->deq_cbs;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 36606dd10b..a71edbb991 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
}
+/* Reset crypto device fastpath APIs to dummy values. */
+__rte_internal
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
+
+/* Setup crypto device fastpath APIs. */
+__rte_internal
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+ const struct rte_cryptodev *dev);
+
static inline void *
get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index eb86e629aa..2378892d40 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
/* spinlock for crypto device callbacks */
static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -903,6 +906,16 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id)
cryptodev_globals.nb_devs++;
}
+ /*
+ * for secondary process, at that point we expect device
+ * to be already 'usable', so shared data and all function
+ * pointers for fast-path devops have to be setup properly
+ * inside rte_cryptodev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ cryptodev_fp_ops_set(rte_crypto_fp_ops +
+ cryptodev->data->dev_id, cryptodev);
+
return cryptodev;
}
@@ -917,6 +930,8 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
dev_id = cryptodev->data->dev_id;
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
/* Close device only if device operations have been set */
if (cryptodev->dev_ops) {
ret = rte_cryptodev_close(dev_id);
@@ -1080,6 +1095,9 @@ rte_cryptodev_start(uint8_t dev_id)
}
diag = (*dev->dev_ops->dev_start)(dev);
+ /* expose selection of PMD fast-path functions */
+ cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
+
rte_cryptodev_trace_start(dev_id, diag);
if (diag == 0)
dev->data->dev_started = 1;
@@ -1109,6 +1127,9 @@ rte_cryptodev_stop(uint8_t dev_id)
return;
}
+ /* point fast-path functions to dummy ones */
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
(*dev->dev_ops->dev_stop)(dev);
rte_cryptodev_trace_stop(dev_id);
dev->data->dev_started = 0;
@@ -2411,3 +2432,11 @@ rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
return nb_drivers++;
}
+
+RTE_INIT(cryptodev_init_fp_ops)
+{
+ uint32_t i;
+
+ for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
+ cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
+}
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..bac5f8d984 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
struct rte_crypto_op **ops, uint16_t nb_ops);
/**< Enqueue packets for processing on queue pair of a device. */
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal cryptodev
+ * queue-pair data.
+ * The main purpose of exposing these pointers is to let the compiler
+ * fetch this data in advance for fast-path cryptodev inline functions.
+ */
+struct rte_cryptodev_qpdata {
+ /** points to array of internal queue pair data pointers. */
+ void **data;
+ /** points to array of enqueue callback data pointers */
+ void **enq_cb;
+ /** points to array of dequeue callback data pointers */
+ void **deq_cb;
+};
+
+struct rte_crypto_fp_ops {
+ /** PMD enqueue burst function. */
+ enqueue_pkt_burst_t enqueue_burst;
+ /** PMD dequeue burst function. */
+ dequeue_pkt_burst_t dequeue_burst;
+ /** Internal queue pair data pointers. */
+ struct rte_cryptodev_qpdata qp;
+ /** Reserved for future ops. */
+ uintptr_t reserved[4];
+} __rte_cache_aligned;
+
+extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
/**
* @internal
* The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 43cf937e40..ed62ced221 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -45,6 +45,9 @@ DPDK_22 {
rte_cryptodev_sym_session_init;
rte_cryptodevs;
+ #added in 21.11
+ rte_crypto_fp_ops;
+
local: *;
};
@@ -109,6 +112,8 @@ EXPERIMENTAL {
INTERNAL {
global:
+ cryptodev_fp_ops_reset;
+ cryptodev_fp_ops_set;
rte_cryptodev_allocate_driver;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
--
2.25.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v7] ethdev: fix representor port ID search by name
2021-10-08 9:27 4% ` [dpdk-dev] [PATCH v6] " Andrew Rybchenko
@ 2021-10-11 12:30 4% ` Andrew Rybchenko
2021-10-11 12:53 4% ` [dpdk-dev] [PATCH v8] " Andrew Rybchenko
3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 12:30 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
Cc: dev, Viacheslav Galaktionov, Xueming Li
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI-breaking release.
Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new backer_port_id field in the
rte_eth_dev_data structure: getting the port ID by name will not work
for them.
The mlx5 changes should be reviewed by the maintainers very carefully,
since we are not sure we patched it correctly.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
v7:
- use dpdk_dev in net/mlx5 as suggested by Viacheslav O.
v6:
- provide more information in the changeset description
v5:
- try to improve name: backer_port_id instead of parent_port_id
- init new field to RTE_MAX_ETHPORTS on allocation to avoid
zero port usage by default
v4:
- apply mlx5 review notes: remove fallback from generic ethdev
code and add fallback to mlx5 code to handle legacy usecase
v3:
- fix mlx5 build breakage
v2:
- fix mlx5 review notes
- try device port ID first before parent in order to address
backward compatibility issue
drivers/net/bnxt/bnxt_reps.c | 1 +
drivers/net/enic/enic_vf_representor.c | 1 +
drivers/net/i40e/i40e_vf_representor.c | 1 +
drivers/net/ice/ice_dcf_vf_representor.c | 1 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
lib/ethdev/ethdev_driver.h | 6 +++---
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 9 +++++----
lib/ethdev/rte_ethdev_core.h | 6 ++++++
11 files changed, 46 insertions(+), 8 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = rep_params->vf_id;
+ eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = vf->vf_id;
+ eth_dev->data->backer_port_id = pf->port_id;
eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
sizeof(struct rte_ether_addr) *
ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = pf->dev_data->port_id;
/* Setting the number queues allocated to the VF */
ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
vf_rep_eth_dev->data->representor_id = repr->vf_id;
+ vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
/* Set representor device ops */
ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..3858984f02 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
priv->mp_id.port_id = eth_dev->data->port_id;
strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..9de8adecf4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
/*
* Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
* For backward compatibility, if no representor info, direct
* map legacy VF (no controller and pf).
*
- * @param ethdev
- * Handle of ethdev port.
+ * @param port_id
+ * Port ID of the backing device.
* @param type
* Representor type.
* @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
*/
__rte_internal
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
c = i / (np * nf);
p = (i / nf) % np;
f = i % nf;
- if (rte_eth_representor_id_get(edev,
+ if (rte_eth_representor_id_get(edev->data->backer_port_id,
eth_da.type,
eth_da.nb_mh_controllers == 0 ? -1 :
eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
eth_dev = eth_dev_get(port_id);
strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
eth_dev->data->mtu = RTE_ETHER_MTU;
pthread_mutex_init(ð_dev->data->flow_ops_mutex, NULL);
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
}
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
return -EINVAL;
/* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+ ret = rte_eth_representor_info_get(port_id, NULL);
if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
controller == -1 && pf == -1) {
/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
if (info == NULL)
return -ENOMEM;
info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+ ret = rte_eth_representor_info_get(port_id, info);
if (ret < 0)
goto out;
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
continue;
if (info->ranges[i].id_end < info->ranges[i].id_base) {
RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- ethdev->data->port_id, info->ranges[i].id_base,
+ port_id, info->ranges[i].id_base,
info->ranges[i].id_end, i);
continue;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
/**< Switch-specific identifier.
* Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
*/
+ uint16_t backer_port_id;
+ /**< Port ID of the backing device.
+ * This device will be used to query representor
+ * info and calculate representor IDs.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
--
2.30.2
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-11 11:29 5% ` Radu Nicolau
2021-10-12 10:24 0% ` Ananyev, Konstantin
1 sibling, 1 reply; 200+ results
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port parameters for the UDP encapsulation
option.
RFC 3948 section 2.1 does not mandate specific UDP ports for the
UDP-Encapsulated ESP header.
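For example, an application configuring NAT-T with non-default ports
would fill the new fields roughly as below (sketch, only the relevant
members shown; 4501 is just an arbitrary example port):

    struct rte_security_ipsec_xform ipsec_xform = { 0 };

    ipsec_xform.options.udp_encap = 1;
    /* RFC 3948 does not mandate 4500/4500, so both ports are settable */
    ipsec_xform.udp.sport = 4500;
    ipsec_xform.udp.dport = 4501;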
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 5 ++---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 7 +++++++
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8b7b0beee2..d24d69b669 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -210,9 +210,8 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
- multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with:
+ new field: IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8ac6632abf..1a29640eea 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -238,6 +238,11 @@ ABI Changes
application to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
+* security: A new structure ``udp`` was added in structure
+ ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+ for UDP encapsulated IPsec traffic.
+
+
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 371d64647a..b30425e206 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -288,6 +293,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform
@ 2021-10-11 11:29 5% ` Radu Nicolau
2021-10-12 10:23 0% ` Ananyev, Konstantin
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
1 sibling, 1 reply; 200+ results
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
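For example (sketch, only the relevant member shown), an application
starting an SA from a non-zero ESN would do:

    struct rte_security_ipsec_xform ipsec_xform = { 0 };

    /* start this SA from an arbitrary extended sequence number */
    ipsec_xform.esn.value = 1000;
    /* or set the 32-bit halves individually */
    ipsec_xform.esn.low = 1000;
    ipsec_xform.esn.hi = 0;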
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/security/rte_security.h | 8 ++++++++
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index baf15aa722..8b7b0beee2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -212,7 +212,7 @@ Deprecation Notices
* security: The structure ``rte_security_ipsec_xform`` will be extended with
multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+ IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c0a7f75518..401c6d453a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,10 @@ ABI Changes
``rte_security_ipsec_xform`` to allow applications to configure SA soft
and hard expiry limits. Limits can be either in number of packets or bytes.
+* security: A new structure ``esn`` was added in structure
+ ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
+ application to start from an arbitrary ESN value for debug and SA lifetime
+ enforcement purposes.
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2013e65e49..371d64647a 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -280,6 +280,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH] sort symbols map
2021-10-05 9:16 4% [dpdk-dev] [PATCH] sort symbols map David Marchand
2021-10-05 14:16 0% ` Kinsella, Ray
@ 2021-10-11 11:36 0% ` Dumitrescu, Cristian
1 sibling, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-11 11:36 UTC (permalink / raw)
To: David Marchand, dev
Cc: thomas, Yigit, Ferruh, Ray Kinsella, Singh, Jasvinder, Medvedkin,
Vladimir, Walsh, Conor, Stephen Hemminger
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, October 5, 2021 10:16 AM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>; Ray
> Kinsella <mdr@ashroe.eu>; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Walsh, Conor <conor.walsh@intel.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH] sort symbols map
>
> Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
>
> Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> Fixes: 4aeb92396b85 ("rib: promote API to stable")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
>
> I should have caught it when merging fib and rib patches...
> But my eyes (or more likely brain) stopped at net/softnic bits.
>
> What do you think?
> Should I wait a bit more and send a global patch to catch any missed
> sorting just before rc1?
>
> In the meantime, if you merge .map updates, try to remember to run the
> command above.
>
> Thanks.
> ---
> drivers/net/softnic/version.map | 2 +-
> lib/fib/version.map | 21 ++++++++++-----------
> lib/rib/version.map | 33 ++++++++++++++++-----------------
> 3 files changed, 27 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/net/softnic/version.map
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
@ 2021-10-11 10:46 0% ` Zhang, Roy Fan
2021-10-12 9:55 3% ` Kinsella, Ray
2 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 10:46 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
Ciara
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Friday, October 8, 2021 9:45 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
>
> Remove *_LIST_END enumerators from asymmetric crypto
> lib to avoid ABI breakage for every new addition in
> enums.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/5] ethdev: update modify field flow action
2021-10-10 23:45 8% ` [dpdk-dev] [PATCH v2 1/5] " Viacheslav Ovsiienko
@ 2021-10-11 9:54 3% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 9:54 UTC (permalink / raw)
To: Viacheslav Ovsiienko, dev
Cc: rasland, matan, shahafs, orika, getelson, thomas
On 10/11/21 2:45 AM, Viacheslav Ovsiienko wrote:
> The generic modify field flow action introduced in [1] has
> some issues related to the immediate source operand:
>
> - immediate source can be presented either as an unsigned
> 64-bit integer or pointer to data pattern in memory.
> There was no explicit pointer field defined in the union.
>
> - the byte ordering for 64-bit integer was not specified.
> Many fields have shorter lengths and byte ordering
> is crucial.
>
> - how the bit offset is applied to the immediate source
> field was not defined and documented.
>
> - 64-bit integer size is not enough to provide IPv6
> addresses.
>
> In order to cover the issues and exclude any ambiguities
> the following is done:
>
> - introduce the explicit pointer field
> in rte_flow_action_modify_data structure
>
> - replace the 64-bit unsigned integer with 16-byte array
>
> - update the modify field flow action documentation
>
> [1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 16 ++++++++++++++++
> doc/guides/rel_notes/release_21_11.rst | 9 +++++++++
> lib/ethdev/rte_flow.h | 17 ++++++++++++++---
> 3 files changed, 39 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 2b42d5ec8c..1ceecb399f 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2835,6 +2835,22 @@ a packet to any other part of it.
> ``value`` sets an immediate value to be used as a source or points to a
> location of the value in memory. It is used instead of ``level`` and ``offset``
> for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
> +The data in memory should be presented exactly in the same byte order and
> +length as in the relevant flow item, i.e. data for field with type
> +RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
> +in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
> +rte_flow_item_ipv6 conventions, and so on. If the field size is large than
large -> larger
> +16 bytes the pattern can be provided as pointer only.
RTE_FLOW_FIELD_MAC_DST, dst, rte_flow_item_eth, RTE_FLOW_FIELD_IPV6_SRC,
rte_flow_item_ipv6 should be ``x``.
> +
> +The bitfield extracted from the memory being applied as second operation
> +parameter is defined by action width and by the destination field offset.
> +Application should provide the data in immediate value memory (either as
> +buffer or by pointer) exactly as item field without any applied explicit offset,
> +and destination packet field (with specified width and bit offset) will be
> +replaced by immediate source bits from the same bit offset. For example,
> +to replace the third byte of MAC address with value 0x85, application should
> +specify destination width as 8, destination width as 16, and provide immediate
destination width twice above
> +value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
>
> .. _table_rte_flow_action_modify_field:
pvalue should be added in the "destination/source field
definition".
The documentation of the dst and src members should be improved to
highlight the difference. Destination cannot be "immediate" or
"pointer". In fact, "pointer" is a kind of "immediate". Maybe
it is better to use "constant value" instead of "immediate".
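To make the intended usage concrete, a minimal sketch of the documented
example (replace the third byte of the destination MAC with 0x85, i.e.
destination bit offset 16, width 8) with the new 16-byte immediate;
field and enum names are taken from this patch and may still change on
review:

    struct rte_flow_action_modify_field conf = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_MAC_DST,
                    .offset = 16,  /* bit offset of the third address byte */
            },
            .src = {
                    .field = RTE_FLOW_FIELD_VALUE,
                    /* full-width immediate, same layout as the dst field of
                     * rte_flow_item_eth; only bits [16..23] are applied since
                     * the source offset is inherited from the destination
                     */
                    .value = { 0x00, 0x00, 0x85 },
            },
            .width = 8,
    };

The structure would then be used as the conf of an
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action.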
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index dfc2cbdeed..41a087d7c1 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -187,6 +187,13 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated, immediate data
udpdated -> updated
> + array is extended, data pointer field is explicitly added to union, the
> + action behavior is defined in more strict fashion and documentation updated.
> + The immediate value behavior has been changed, the entire immediate field
> + should be provided, and offset for immediate source bitfield is assigned
> + from destination one.
> +
>
> ABI Changes
> -----------
> @@ -222,6 +229,8 @@ ABI Changes
> ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> and hard expiry limits. Limits can be either in number of packets or bytes.
>
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated.
udpdated -> updated
I'm not sure that it makes sense to duplicate ABI changes if
API is changed.
> +
>
> Known Issues
> ------------
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 7b1ed7f110..953924d42b 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
> };
>
> /**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
Isn't adding the missing experimental header a separate fix?
> * Field description for MODIFY_FIELD action.
> */
"Another packet field" in the next paragraph I read as
a field of another packet which sounds confusing.
I guess it is "Another field of the packet" in fact.
I think it would be nice to clarify as well.
> struct rte_flow_action_modify_data {
> @@ -3217,10 +3220,18 @@ struct rte_flow_action_modify_data {
> uint32_t offset;
> };
> /**
> - * Immediate value for RTE_FLOW_FIELD_VALUE or
> - * memory address for RTE_FLOW_FIELD_POINTER.
> + * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
> + * same byte order and length as in relevant rte_flow_item_xxx.
> + * The immediate source bitfield offset is inherited from
> + * the destination's one.
> */
> - uint64_t value;
> + uint8_t value[16];
> + /*
It should be a Doxygen style comment.
> + * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
> + * should be the same as for relevant field in the
> + * rte_flow_item_xxx structure.
> + */
> + void *pvalue;
> };
> };
>
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
` (5 preceding siblings ...)
2021-10-08 18:13 0% ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
@ 2021-10-11 9:22 0% ` Andrew Rybchenko
6 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 9:22 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
Hi Konstantin,
On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
> related data based on their functionality:
> first 64B line for Rx, second one for Tx.
> Didn't observe any real performance difference comparing to
> original layout. Though decided to keep a new one, as it seems
> a bit more plausible.
>
> v4 changes:
> - Fix secondary process attach (Pavan)
> - Fix build failure (Ferruh)
> - Update lib/ethdev/verion.map (Ferruh)
> Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> section makes checkpatch.sh to complain.
>
> v3 changes:
> - Changes in public struct naming (Jerin/Haiyue)
> - Split patches
> - Update docs
> - Shamelessly included Andrew's patch:
> https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
> into these series.
> I have to do similar thing here, so decided to avoid duplicated effort.
>
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
>
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> related data pointer from rte_eth_dev into a separate flat array.
> We keep it public to still be able to use inline functions for these
> 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> Note that apart from function pointers itself, each element of this
> flat array also contains two opaque pointers for each ethdev:
> 1) a pointer to an array of internal queue data pointers
> 2) points to array of queue callback data pointers.
> Note that exposing this extra information allows us to avoid extra
> changes inside PMD level, plus should help to avoid possible
> performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
> (rte_eth_rx_burst(), etc.) to use new public flat array.
> While it is an ABI breakage, this change is intended to be transparent
> for both users (no changes in user app is required) and PMD developers
> (no changes in PMD is required).
> One extra note - with new implementation RX/TX callback invocation
> will cost one extra function call with this changes. That might cause
> some slowdown for code-path with RX/TX callbacks heavily involved.
> Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> things into internal header: <ethdev_driver.h>.
>
> That approach was selected to:
> - Avoid(/minimize) possible performance losses.
> - Minimize required changes inside PMDs.
>
> Performance testing results (ICX 2.0GHz, E810 (ice)):
> - testpmd macswap fwd mode, plus
> a) no RX/TX callbacks:
> no actual slowdown observed
> b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> ~2% slowdown
> - l3fwd: no actual slowdown observed
>
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy
> and provide your feedback.
Many thanks for the very good patch series.
I hope it will make it into 21.11.
If you need any help with the cosmetic fixes suggested
in my review, please let me know.
Andrew.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-11 9:02 0% ` Andrew Rybchenko
2021-10-11 15:47 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-11 9:02 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Rework fast-path ethdev functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.
I'm sorry for nit picking here and below:
RX -> Rx, TX -> Tx everywhere above.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> lib/ethdev/ethdev_private.c | 31 +++++
> lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
> lib/ethdev/version.map | 3 +
> 3 files changed, 208 insertions(+), 68 deletions(-)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..1222c6f84e 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> fpo->txq.data = dev->data->tx_queues;
> fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> }
> +
> +uint16_t
> +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> + void *opaque)
> +{
> + const struct rte_eth_rxtx_callback *cb = opaque;
> +
> + while (cb != NULL) {
> + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> + nb_pkts, cb->param);
> + cb = cb->next;
> + }
> +
> + return nb_rx;
> +}
> +
> +uint16_t
> +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> + const struct rte_eth_rxtx_callback *cb = opaque;
> +
> + while (cb != NULL) {
> + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> + cb->param);
> + cb = cb->next;
> + }
> +
> + return nb_pkts;
> +}
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index cdd16d6e57..c0e1a40681 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
>
> #include <rte_ethdev_core.h>
>
> +/**
> + * @internal
> + * Helper routine for eth driver rx_burst API.
rx -> Rx
> + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
> + * Does necessary post-processing - invokes RX callbacks if any, etc.
RX -> Rx
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param queue_id
> + * The index of the receive queue from which to retrieve input packets.
Isn't:
The index of the queue from which packets are received?
> + * @param rx_pkts
> + * The address of an array of pointers to *rte_mbuf* structures that
> + * have been retrieved from the device.
> + * @param nb_pkts
Should be @param nb_rx
> + * The number of packets that were retrieved from the device.
> + * @param nb_pkts
> + * The number of elements in *rx_pkts* array.
@p should be used to refer to a parameter.
The description does not explain why both nb_rx and nb_pkts
are necessary. Isn't nb_pkts >= nb_rx, so that nb_rx alone would be
sufficient?
> + * @param opaque
> + * Opaque pointer of RX queue callback related data.
RX -> Rx
> + *
> + * @return
> + * The number of packets effectively supplied to the *rx_pkts* array.
@p should be used to refer to a parameter.
> + */
> +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> + void *opaque);
> +
> /**
> *
> * Retrieve a burst of input packets from a receive queue of an Ethernet
> @@ -4995,23 +5022,37 @@ static inline uint16_t
> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> {
> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> uint16_t nb_rx;
> + struct rte_eth_fp_ops *p;
p is typically a very bad name in a function with
many pointer variables etc. Maybe "fpo" as in the previous
patch?
> + void *cb, *qd;
Please avoid declaring variables, especially pointers, on
one line.
I'd suggest using 'rxq' instead of 'qd'. The first parameter
of rx_pkt_burst is 'rxq'.
Also, 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
only. If so, it could trigger an unused-variable warning when
RTE_ETHDEV_RXTX_CALLBACKS is not defined.
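One option (an illustrative sketch only, not a requirement) is to scope the
declaration under the same ifdef that uses it:

	void *rxq;
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
	void *cb;	/* only referenced when callbacks are compiled in */
#endif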
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->rxq.data[queue_id];
>
> #ifdef RTE_ETHDEV_DEBUG_RX
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
>
> - if (queue_id >= dev->data->nb_rx_queues) {
> - RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
RX -> Rx
> + queue_id, port_id);
> return 0;
> }
> #endif
> - nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> - rx_pkts, nb_pkts);
> +
> + nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
>
> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> - struct rte_eth_rxtx_callback *cb;
>
> /* __ATOMIC_RELEASE memory order was used when the
> * call back was inserted into the list.
> @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> * not required.
> */
> - cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
> - __ATOMIC_RELAXED);
> -
> - if (unlikely(cb != NULL)) {
> - do {
> - nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> - nb_pkts, cb->param);
> - cb = cb->next;
> - } while (cb != NULL);
> - }
> + cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
> + if (unlikely(cb != NULL))
> + nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
> + nb_rx, nb_pkts, cb);
> #endif
>
> rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
> @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> static inline int
> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> {
> - struct rte_eth_dev *dev;
> + struct rte_eth_fp_ops *p;
> + void *qd;
p -> fpo, qd -> rxq
> +
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> + return -EINVAL;
> + }
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->rxq.data[queue_id];
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> - dev = &rte_eth_devices[port_id];
> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
> - if (queue_id >= dev->data->nb_rx_queues ||
> - dev->data->rx_queues[queue_id] == NULL)
> + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
> + if (qd == NULL)
> return -EINVAL;
>
> - return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
> + return (int)(*p->rx_queue_count)(qd);
> }
>
> /**@{@name Rx hardware descriptor states
> @@ -5108,21 +5154,30 @@ static inline int
> rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> uint16_t offset)
> {
> - struct rte_eth_dev *dev;
> - void *rxq;
> + struct rte_eth_fp_ops *p;
> + void *qd;
p -> fpo, qd -> rxq
>
> #ifdef RTE_ETHDEV_DEBUG_RX
> - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> + return -EINVAL;
> + }
> #endif
> - dev = &rte_eth_devices[port_id];
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->rxq.data[queue_id];
> +
> #ifdef RTE_ETHDEV_DEBUG_RX
> - if (queue_id >= dev->data->nb_rx_queues)
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + if (qd == NULL)
> return -ENODEV;
> #endif
> - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
> - rxq = dev->data->rx_queues[queue_id];
> -
> - return (*dev->rx_descriptor_status)(rxq, offset);
> + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
> + return (*p->rx_descriptor_status)(qd, offset);
> }
>
> /**@{@name Tx hardware descriptor states
> @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
> uint16_t queue_id, uint16_t offset)
> {
> - struct rte_eth_dev *dev;
> - void *txq;
> + struct rte_eth_fp_ops *p;
> + void *qd;
p -> fpo, qd -> txq
>
> #ifdef RTE_ETHDEV_DEBUG_TX
> - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> + return -EINVAL;
> + }
> #endif
> - dev = &rte_eth_devices[port_id];
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->txq.data[queue_id];
> +
> #ifdef RTE_ETHDEV_DEBUG_TX
> - if (queue_id >= dev->data->nb_tx_queues)
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + if (qd == NULL)
> return -ENODEV;
> #endif
> - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
> - txq = dev->data->tx_queues[queue_id];
> -
> - return (*dev->tx_descriptor_status)(txq, offset);
> + RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
> + return (*p->tx_descriptor_status)(qd, offset);
> }
>
> +/**
> + * @internal
> + * Helper routine for eth driver tx_burst API.
> + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
> + * Does necessary pre-processing - invokes TX callbacks if any, etc.
TX -> Tx
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param queue_id
> + * The index of the transmit queue through which output packets must be
> + * sent.
> + * @param tx_pkts
> + * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
> + * which contain the output packets.
*nb_pkts* -> @p nb_pkts
> + * @param nb_pkts
> + * The maximum number of packets to transmit.
> + * @return
> + * The number of output packets to transmit.
> + */
> +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
> +
> /**
> * Send a burst of output packets on a transmit queue of an Ethernet device.
> *
> @@ -5256,20 +5342,34 @@ static inline uint16_t
> rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> {
> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + struct rte_eth_fp_ops *p;
> + void *cb, *qd;
Same as above
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->txq.data[queue_id];
>
> #ifdef RTE_ETHDEV_DEBUG_TX
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
>
> - if (queue_id >= dev->data->nb_tx_queues) {
> - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
TX -> Tx
> + queue_id, port_id);
> return 0;
> }
> #endif
>
> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> - struct rte_eth_rxtx_callback *cb;
>
> /* __ATOMIC_RELEASE memory order was used when the
> * call back was inserted into the list.
> @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> * not required.
> */
> - cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
> - __ATOMIC_RELAXED);
> -
> - if (unlikely(cb != NULL)) {
> - do {
> - nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> - cb->param);
> - cb = cb->next;
> - } while (cb != NULL);
> - }
> + cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
> + if (unlikely(cb != NULL))
> + nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
> + nb_pkts, cb);
> #endif
>
> - rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
> - nb_pkts);
> - return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
> + nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
> +
> + rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
> + return nb_pkts;
> }
>
> /**
> @@ -5354,31 +5449,42 @@ static inline uint16_t
> rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
> struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> {
> - struct rte_eth_dev *dev;
> + struct rte_eth_fp_ops *p;
> + void *qd;
p->fpo, qd->txq
>
> #ifdef RTE_ETHDEV_DEBUG_TX
> - if (!rte_eth_dev_is_valid_port(port_id)) {
> - RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> + if (port_id >= RTE_MAX_ETHPORTS ||
> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid port_id=%u or queue_id=%u\n",
> + port_id, queue_id);
> rte_errno = ENODEV;
> return 0;
> }
> #endif
>
> - dev = &rte_eth_devices[port_id];
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[port_id];
> + qd = p->txq.data[queue_id];
>
> #ifdef RTE_ETHDEV_DEBUG_TX
> - if (queue_id >= dev->data->nb_tx_queues) {
> - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> + if (!rte_eth_dev_is_valid_port(port_id)) {
> + RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
TX -> Tx
> + rte_errno = ENODEV;
> + return 0;
> + }
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
TX -> Tx
> + queue_id, port_id);
> rte_errno = EINVAL;
> return 0;
> }
> #endif
>
> - if (!dev->tx_pkt_prepare)
> + if (!p->tx_pkt_prepare)
Please change it to compare against NULL since you touch the line,
just to be consistent with the DPDK coding style and the lines above.
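I.e. something like this (sketch):

	if (p->tx_pkt_prepare == NULL)
		return nb_pkts;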
> return nb_pkts;
>
> - return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
> - tx_pkts, nb_pkts);
> + return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
> }
>
> #else
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 904bce6ea1..79e62dcf61 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -7,6 +7,8 @@ DPDK_22 {
> rte_eth_allmulticast_disable;
> rte_eth_allmulticast_enable;
> rte_eth_allmulticast_get;
> + rte_eth_call_rx_callbacks;
> + rte_eth_call_tx_callbacks;
> rte_eth_dev_adjust_nb_rx_tx_desc;
> rte_eth_dev_callback_register;
> rte_eth_dev_callback_unregister;
> @@ -76,6 +78,7 @@ DPDK_22 {
> rte_eth_find_next_of;
> rte_eth_find_next_owned_by;
> rte_eth_find_next_sibling;
> + rte_eth_fp_ops;
> rte_eth_iterator_cleanup;
> rte_eth_iterator_init;
> rte_eth_iterator_next;
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-09 12:05 0% ` fengchengwen
2021-10-11 1:18 0% ` fengchengwen
@ 2021-10-11 8:35 0% ` Andrew Rybchenko
2021-10-11 15:15 0% ` Ananyev, Konstantin
2 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 8:35 UTC (permalink / raw)
To: fengchengwen, Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
On 10/9/21 3:05 PM, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>> inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>> (for secondary process) we call eth_dev_fp_ops_setup, which
>> copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>> into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>> flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>> lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
>> lib/ethdev/ethdev_private.h | 7 +++++
>> lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
>> lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>> 4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>> RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>> return str == NULL ? -1 : 0;
>> }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> + __rte_unused struct rte_mbuf **rx_pkts,
>> + __rte_unused uint16_t nb_pkts)
>> +{
>> + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> + rte_errno = ENOTSUP;
>> + return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> + __rte_unused struct rte_mbuf **tx_pkts,
>> + __rte_unused uint16_t nb_pkts)
>> +{
>> + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> + rte_errno = ENOTSUP;
>> + return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.
Sorry, but I see no point in hiding it inside ethdev.
Of course, the prototype should be reconsidered if we make
this ethdev-internal API available to drivers.
If so, I agree that the parameter should be port_id.
[snip]
>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>> /* Parse devargs value for representor parameter. */
>> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> + const struct rte_eth_dev *dev);
>
> Some drivers control the transmit/receive function during operation. E.g.
> for hns3 driver, when detect reset, primary process will set rx/tx burst to dummy, after
> process reset, primary process will set the correct rx/tx burst. During this process, the
> send and receive threads are still working, but the bursts they call are changed. So:
> 1. it is recommended that trace be deleted from the dummy function.
> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.
Good point.
[snip]
>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>> /**< @internal Check the status of a Tx descriptor */
>>
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> + void **data;
>> + /**< points to array of internal queue data pointers */
>> + void **clbk;
>> + /**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> + /**
>> + * Rx fast-path functions and related data.
>> + * 64-bit systems: occupies first 64B line
>> + */
>> + eth_rx_burst_t rx_pkt_burst;
>> + /**< PMD receive function. */
>> + eth_rx_queue_count_t rx_queue_count;
>> + /**< Get the number of used RX descriptors. */
>> + eth_rx_descriptor_status_t rx_descriptor_status;
>> + /**< Check the status of a Rx descriptor. */
>> + struct rte_ethdev_qdata rxq;
>> + /**< Rx queues data. */
>> + uintptr_t reserved1[3];
>> +
>> + /**
>> + * Tx fast-path functions and related data.
>> + * 64-bit systems: occupies second 64B line
>> + */
>> + eth_tx_burst_t tx_pkt_burst;
>
> Why not place rx_pkt_burst/tx_pkt_burst/rxq /txq to the first cacheline ?
> Other function, e.g. rx_queue_count/descriptor_status are low frequency call functions.
+1 Very good question
If so, tx_pkt_prepare should be on the first cache-line
as well.
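A purely illustrative sketch of such a layout (not the actual patch; the
typedefs, struct rte_ethdev_qdata and __rte_cache_aligned are assumed to come
from the ethdev headers):

	struct rte_eth_fp_ops {
		/* first 64B line: hot fast-path members */
		eth_rx_burst_t rx_pkt_burst;
		struct rte_ethdev_qdata rxq;
		eth_tx_burst_t tx_pkt_burst;
		eth_tx_prep_t tx_pkt_prepare;
		struct rte_ethdev_qdata txq;
		uintptr_t reserved1[1];
		/* second 64B line: less frequently called members */
		eth_rx_queue_count_t rx_queue_count;
		eth_rx_descriptor_status_t rx_descriptor_status;
		eth_tx_descriptor_status_t tx_descriptor_status;
		uintptr_t reserved2[5];
	} __rte_cache_aligned;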
>> + /**< PMD transmit function. */
>> + eth_tx_prep_t tx_pkt_prepare;
>> + /**< PMD transmit prepare function. */
>> + eth_tx_descriptor_status_t tx_descriptor_status;
>> + /**< Check the status of a Tx descriptor. */
>> + struct rte_ethdev_qdata txq;
>> + /**< Tx queues data. */
>> + uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>
>> /**
>> * @internal
>>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
@ 2021-10-11 8:31 0% ` Thomas Monjalon
2021-10-11 16:58 0% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-12 8:50 0% ` [dpdk-dev] " Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-11 8:31 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Stephen Hemminger, ray.kinsella,
bruce.richardson
08/10/2021 22:45, Akhil Goyal:
> In struct rte_security_ipsec_sa_options, for every new option
> added, there is an ABI breakage, to avoid, a reserved_opts
> bitfield is added to for the remaining bits available in the
> structure.
> Now for every new sa option, these reserved_opts can be reduced
> and new option can be added.
How do you make sure this field is initialized to 0?
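To make the question concrete, an illustrative sketch (using 'esn' as just
one example flag of the structure):

	struct rte_security_ipsec_sa_options opts = { .esn = 1 };
	/* designated initializer: reserved_opts is implicitly zeroed */

	struct rte_security_ipsec_sa_options opts2;
	opts2.esn = 1;
	/* no initializer: reserved_opts holds indeterminate stack contents
	 * unless the application memsets the structure first */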
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
2021-10-08 6:41 4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-11 5:20 0% ` Peng, ZhihongX
@ 2021-10-11 8:25 0% ` Dmitry Kozlyuk
2021-10-13 1:53 0% ` Peng, ZhihongX
2021-10-13 1:52 4% ` [dpdk-dev] [PATCH v4 " zhihongx.peng
2 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-11 8:25 UTC (permalink / raw)
To: zhihongx.peng; +Cc: olivier.matz, dev, stable
2021-10-08 06:41 (UTC+0000), zhihongx.peng@intel.com:
> From: Zhihong Peng <zhihongx.peng@intel.com>
>
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be
> released alone.
>
> Fixes: af75078fece3 (first public release)
> Cc: stable@dpdk.org
As I have explained before, backporting this will introduce a double-free bug
in user apps unless their code is fixed, so it must not be done.
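A minimal sketch of the concern (assuming a typical application, where 'ctx'
is the application's parse context):

	struct cmdline *cl = cmdline_stdin_new(ctx, "prompt> ");
	/* ... interact ... */
	cmdline_stdin_exit(cl);	/* with the backport, this already frees cl */
	cmdline_free(cl);	/* existing app code: now a double free */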
>
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/cmdline/cmdline_socket.c | 1 +
> 2 files changed, 6 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index efeffe37a0..be24925d16 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,11 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* cmdline: The API cmdline_stdin_exit has added cmdline_free function.
> + Malloc cl in the cmdline_stdin_new function, so release in the
> + cmdline_stdin_exit function is logical. The application code
> + that calls cmdline_free needs to be deleted.
> +
There's probably no need to go into such details, suggestion:
* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
Calls to ``cmdline_free()`` after it need to be deleted from applications.
>
> ABI Changes
> -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
> return;
>
> terminal_restore(cl);
> + cmdline_free(cl);
> }
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
2021-10-09 12:05 0% ` fengchengwen
@ 2021-10-11 8:25 0% ` Andrew Rybchenko
2021-10-11 16:52 0% ` Ananyev, Konstantin
1 sibling, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-11 8:25 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
> inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> (for secondary process) we call eth_dev_fp_ops_setup, which
> copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Overall LGTM, a few nits below.
> ---
> lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
> lib/ethdev/ethdev_private.h | 7 +++++
> lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 141 insertions(+)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> return str == NULL ? -1 : 0;
> }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> + __rte_unused struct rte_mbuf **rx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
May be "unconfigured" -> "stopped" ? Or "non-started" ?
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> + __rte_unused struct rte_mbuf **tx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
May be "unconfigured" -> "stopped" ?
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> +{
> + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> + static const struct rte_eth_fp_ops dummy_ops = {
> + .rx_pkt_burst = dummy_eth_rx_burst,
> + .tx_pkt_burst = dummy_eth_tx_burst,
> + .rxq = {.data = dummy_data, .clbk = dummy_data,},
> + .txq = {.data = dummy_data, .clbk = dummy_data,},
> + };
> +
> + *fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> + const struct rte_eth_dev *dev)
> +{
> + fpo->rx_pkt_burst = dev->rx_pkt_burst;
> + fpo->tx_pkt_burst = dev->tx_pkt_burst;
> + fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> + fpo->rx_queue_count = dev->rx_queue_count;
> + fpo->rx_descriptor_status = dev->rx_descriptor_status;
> + fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> + fpo->rxq.data = dev->data->rx_queues;
> + fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> + fpo->txq.data = dev->data->tx_queues;
> + fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> /* Parse devargs value for representor parameter. */
> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> + const struct rte_eth_dev *dev);
> +
> #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
> static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>
> +/* public fast-path API */
Shouldn't it be a Doxygen-style comment?
> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
> /* spinlock for eth device callbacks */
> static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
[snip]
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> /**< @internal Check the status of a Tx descriptor */
>
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> + void **data;
> + /**< points to array of internal queue data pointers */
Please put the documentation on the same line, or just
put the documentation before the documented member.
> + void **clbk;
> + /**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> + /**
> + * Rx fast-path functions and related data.
> + * 64-bit systems: occupies first 64B line
> + */
As I understand it, the above comment is for a group of the fields
below. If so, the Doxygen annotation for member groups should
be used.
> + eth_rx_burst_t rx_pkt_burst;
> + /**< PMD receive function. */
May I ask to avoid putting documentation after the member on a
separate line. It makes sense if it is located on the same
line, but otherwise it should simply be put before the
documented member.
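I.e. something like this (illustrative only):

	struct rte_ethdev_qdata {
		/** Points to array of internal queue data pointers. */
		void **data;
		void **clbk; /**< Points to array of queue callback data pointers. */
	};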
[snip]
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device
@ 2021-10-11 7:59 3% ` Xuan Ding
0 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-10-11 7:59 UTC (permalink / raw)
To: dev, anatoly.burakov, maxime.coquelin, chenbo.xia
Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
yinan.wang, yvonnex.yang, Xuan Ding
This series enables DMA devices to use vfio in async vhost.
The first patch extends the capability of the current vfio DMA mapping
API to allow partial unmapping of adjacent memory if the platform
does not support partial unmapping. The second patch involves the
IOMMU programming for guest memory in async vhost.
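A hedged illustration of the behaviour targeted by the first patch (the
addresses and length below are made up for the example):

	uint64_t va = 0x100000000, iova = va, len = 0x200000;

	/* two adjacent regions mapped separately ... */
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va, iova, len);
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, va + len,
			iova + len, len);
	/* ... can be unmapped separately, even on platforms that only
	 * support unmapping whole mapped chunks */
	rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, va, iova, len);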
v7:
* Fix an operator error.
v6:
* Fix a potential memory leak.
v5:
* Fix issue of a pointer be freed early.
v4:
* Fix a format issue.
v3:
* Move the async_map_status flag to virtio_net structure to avoid
ABI breaking.
v2:
* Add rte_errno filtering for some devices bound in the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.
Xuan Ding (2):
vfio: allow partially unmapping adjacent memory
vhost: enable IOMMU for async vhost
lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
lib/vhost/vhost.h | 4 +
lib/vhost/vhost_user.c | 116 +++++++++++++-
3 files changed, 346 insertions(+), 112 deletions(-)
--
2.17.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
2021-10-07 11:27 6% ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-11 8:06 0% ` Andrew Rybchenko
2021-10-12 17:59 0% ` Hyong Youb Kim (hyonkim)
1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 8:06 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Currently majority of fast-path ethdev ops take pointers to internal
> queue data structures as an input parameter.
> While eth_rx_queue_count() takes a pointer to rte_eth_dev and queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify parameters list of all fast-path ethdev ops.
> This patch changes eth_rx_queue_count() to accept pointer to internal
> queue data as input parameter.
> While this change is transparent to user, it still counts as an ABI change,
> as eth_rx_queue_count_t is used by ethdev public inline function
> rte_eth_rx_queue_count().
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The patch introduces a number of usages of rte_eth_devices in
drivers. As I understand it, this is undesirable, but I don't
think it is a blocker for the patch series. It should be
addressed separately.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v9 0/5] Add PIE support for HQoS library
@ 2021-10-11 7:55 3% ` Liguzinski, WojciechX
0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-11 7:55 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management (which is designed
to control the queue length but does not control latency directly and is now being
obsoleted). However, more advanced queue management is required to address this problem
and provide desirable quality of service to users.
This solution (RFC) proposes usage of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modifying existing data
structures, adding a new set of data structures to the library, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/autotest_data.py | 18 +
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 60 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 6 +-
examples/qos_sched/app_thread.c | 1 -
examples/qos_sched/cfg_file.c | 82 +-
examples/qos_sched/init.c | 7 +-
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 10 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 228 ++--
lib/sched/rte_sched.h | 53 +-
lib/sched/version.map | 3 +
19 files changed, 2050 insertions(+), 190 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
2021-10-08 7:44 0% ` Liu, Changpeng
@ 2021-10-11 6:58 0% ` Xia, Chenbo
0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-10-11 6:58 UTC (permalink / raw)
To: Liu, Changpeng, David Marchand, Harris, James R
Cc: dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar
Hi David & Changpeng,
> -----Original Message-----
> From: Liu, Changpeng <changpeng.liu@intel.com>
> Sent: Friday, October 8, 2021 3:45 PM
> To: David Marchand <david.marchand@redhat.com>; Harris, James R
> <james.r.harris@intel.com>
> Cc: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron
> Conole <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> Subject: RE: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
>
> Thanks, I have worked with Chenbo to address this issue before. After enabling
> the `ALLOW_INTERNAL_API` option, it now works with SPDK.
>
> Another issue raised by Jim Harris is that for distro packaged DPDK, since
> this option isn't enabled by default, this will not allow SPDK
> to use the distro packaged DPDK after this release.
I think for this problem, we have two options: enable driver sdk by default or
let OSV configure the option when building distros. I'm fine with either option.
@David, What do you think?
Thanks,
Chenbo
>
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Friday, October 8, 2021 3:08 PM
> > To: Liu, Changpeng <changpeng.liu@intel.com>
> > Cc: Xia, Chenbo <chenbo.xia@intel.com>; Harris, James R
> > <james.r.harris@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron Conole
> > <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> > <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> > Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> >
> > Hello,
> >
> > On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com>
> wrote:
> > >
> > > I tried the above DPDK patches, and got the following errors:
> > >
> > > pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute
> error:
> > Symbol is not public ABI
> > > 115 | rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
> > > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > pci.c: In function ‘cfg_write_rte’:
> > > pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute
> error:
> > Symbol is not public ABI
> > > 125 | rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
> > > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > pci.c: In function ‘register_rte_driver’:
> > > pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute
> error:
> > Symbol is not public ABI
> > > 375 | rte_pci_register(&driver->driver);
> >
> > I should have got this warning... but compilation passed fine for me.
> > Happy you tested it.
> >
> > >
> > > We may use the new added API to replace rte_pci_write_config and
> > rte_pci_read_config, but SPDK
> > > do require rte_pci_register().
> >
> > Since SPDK has a PCI driver, you'll need to compile code that calls
> > those PCI driver internal API with ALLOW_INTERNAL_API defined.
> > You can probably add a #define ALLOW_INTERNAL_API first thing (it's
> > important to have it defined before including any dpdk header) in
> > pci.c
> >
> > Another option, is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
> > $(DPDK_INC) -DALLOW_EXPERIMENTAL_API.
> >
> > Can someone from SPDK take over this and sync with Chenbo?
> >
> >
> > Thanks.
> >
> > --
> > David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
2021-10-08 6:41 4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
@ 2021-10-11 5:20 0% ` Peng, ZhihongX
2021-10-11 8:25 0% ` Dmitry Kozlyuk
2021-10-13 1:52 4% ` [dpdk-dev] [PATCH v4 " zhihongx.peng
2 siblings, 0 replies; 200+ results
From: Peng, ZhihongX @ 2021-10-11 5:20 UTC (permalink / raw)
To: dmitry.kozliuk; +Cc: dev, stable, olivier.matz
> -----Original Message-----
> From: Peng, ZhihongX <zhihongx.peng@intel.com>
> Sent: Friday, October 8, 2021 2:42 PM
> To: olivier.matz@6wind.com; dmitry.kozliuk@gmail.com
> Cc: dev@dpdk.org; Peng, ZhihongX <zhihongx.peng@intel.com>;
> stable@dpdk.org
> Subject: [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
>
> From: Zhihong Peng <zhihongx.peng@intel.com>
>
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be released alone.
>
> Fixes: af75078fece3 (first public release)
> Cc: stable@dpdk.org
>
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/cmdline/cmdline_socket.c | 1 +
> 2 files changed, 6 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index efeffe37a0..be24925d16 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,11 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* cmdline: The API cmdline_stdin_exit has added cmdline_free function.
> + Malloc cl in the cmdline_stdin_new function, so release in the
> + cmdline_stdin_exit function is logical. The application code
> + that calls cmdline_free needs to be deleted.
> +
>
> ABI Changes
> -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
> return;
>
> terminal_restore(cl);
> + cmdline_free(cl);
> }
> --
> 2.25.1
Hi, kozliuk
Can you give me an ack? I have submitted v3 and added the release notes.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v10 1/8] bbdev: add device info related to data endianness
@ 2021-10-11 4:32 4% ` nipun.gupta
0 siblings, 0 replies; 200+ results
From: nipun.gupta @ 2021-10-11 4:32 UTC (permalink / raw)
To: dev, gakhil, nicolas.chautru; +Cc: david.marchand, hemant.agrawal, Nipun Gupta
From: Nicolas Chautru <nicolas.chautru@intel.com>
Adding device information to capture explicitly the assumption
of the input/output data byte endianness being processed.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
drivers/baseband/null/bbdev_null.c | 6 ++++++
drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
lib/bbdev/rte_bbdev.h | 4 ++++
7 files changed, 15 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c0a7f75518..135aa467f2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -194,6 +194,7 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* bbdev: Added device info related to data byte endianness processing.
ABI Changes
-----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..361e06cf94 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1088,6 +1088,7 @@ acc100_dev_info_get(struct rte_bbdev *dev,
#else
dev_info->harq_buffer_size = 0;
#endif
+ dev_info->data_endianness = RTE_LITTLE_ENDIAN;
acc100_check_ir(d);
}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..ee457f3071 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..703bb611a0 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c
index 53c538ba44..753d920e18 100644
--- a/drivers/baseband/null/bbdev_null.c
+++ b/drivers/baseband/null/bbdev_null.c
@@ -77,6 +77,12 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
dev_info->cpu_flag_reqs = NULL;
dev_info->min_alignment = 0;
+ /* BBDEV null device does not process the data, so
+ * endianness setting is not relevant, but setting it
+ * here for code completeness.
+ */
+ dev_info->data_endianness = RTE_LITTLE_ENDIAN;
+
rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
}
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 77e9a2ecbc..7dfeec665a 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -251,6 +251,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
dev_info->capabilities = bbdev_capabilities;
dev_info->min_alignment = 64;
dev_info->harq_buffer_size = 0;
+ dev_info->data_endianness = RTE_LITTLE_ENDIAN;
rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
}
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e697..e863bd913f 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -309,6 +309,10 @@ struct rte_bbdev_driver_info {
uint16_t min_alignment;
/** HARQ memory available in kB */
uint32_t harq_buffer_size;
+ /** Byte endianness (RTE_BIG_ENDIAN/RTE_LITTLE_ENDIAN) supported
+ * for input/output data
+ */
+ uint8_t data_endianness;
/** Default queue configuration used if none is supplied */
struct rte_bbdev_queue_conf default_queue_conf;
/** Device operation capabilities */
--
2.17.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-09 12:05 0% ` fengchengwen
@ 2021-10-11 1:18 0% ` fengchengwen
2021-10-11 8:35 0% ` Andrew Rybchenko
2021-10-11 15:15 0% ` Ananyev, Konstantin
2 siblings, 0 replies; 200+ results
From: fengchengwen @ 2021-10-11 1:18 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan
Sorry to self-reply.
I think it would be better if 'struct rte_eth_dev' held a pointer to
'struct rte_eth_fp_ops', e.g.:
struct rte_eth_dev {
	struct rte_eth_fp_ops *fp_ops;
	... // other fields
};
The eth framework sets the pointer in rte_eth_dev_pci_allocate(), and the driver fills
in the corresponding callbacks:
dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
...
In this way, the behavior of the primary and secondary processes can be unified, and it
stays basically the same as the original flow.
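A small sketch of the wiring implied above (purely illustrative; the helper
name is hypothetical and 'fp_ops' is the proposed new member):

	static void
	eth_dev_fp_ops_attach(struct rte_eth_dev *dev)
	{
		dev->fp_ops = &rte_eth_fp_ops[dev->data->port_id];
	}

called from rte_eth_dev_pci_allocate() / rte_eth_dev_attach_secondary(),
after which a driver can simply do dev->fp_ops->rx_pkt_burst = xxx_recv_pkts.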
On 2021/10/9 20:05, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>> inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>> (for secondary process) we call eth_dev_fp_ops_setup, which
>> copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>> into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>> flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>> lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
>> lib/ethdev/ethdev_private.h | 7 +++++
>> lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
>> lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>> 4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>> RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>> return str == NULL ? -1 : 0;
>> }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> + __rte_unused struct rte_mbuf **rx_pkts,
>> + __rte_unused uint16_t nb_pkts)
>> +{
>> + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> + rte_errno = ENOTSUP;
>> + return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> + __rte_unused struct rte_mbuf **tx_pkts,
>> + __rte_unused uint16_t nb_pkts)
>> +{
>> + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> + rte_errno = ENOTSUP;
>> + return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.
>
>> +{
>> + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>> + static const struct rte_eth_fp_ops dummy_ops = {
>> + .rx_pkt_burst = dummy_eth_rx_burst,
>> + .tx_pkt_burst = dummy_eth_tx_burst,
>> + .rxq = {.data = dummy_data, .clbk = dummy_data,},
>> + .txq = {.data = dummy_data, .clbk = dummy_data,},
>> + };
>> +
>> + *fpo = dummy_ops;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> + const struct rte_eth_dev *dev)
>
> Because fp_ops and eth_dev is a one-to-one correspondence. It's better only use
> port_id parameter.
>
>> +{
>> + fpo->rx_pkt_burst = dev->rx_pkt_burst;
>> + fpo->tx_pkt_burst = dev->tx_pkt_burst;
>> + fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>> + fpo->rx_queue_count = dev->rx_queue_count;
>> + fpo->rx_descriptor_status = dev->rx_descriptor_status;
>> + fpo->tx_descriptor_status = dev->tx_descriptor_status;
>> +
>> + fpo->rxq.data = dev->data->rx_queues;
>> + fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>> +
>> + fpo->txq.data = dev->data->tx_queues;
>> + fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>> +}
>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>> /* Parse devargs value for representor parameter. */
>> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> + const struct rte_eth_dev *dev);
>
> Some drivers control the transmit/receive function during operation. E.g.
> for hns3 driver, when detect reset, primary process will set rx/tx burst to dummy, after
> process reset, primary process will set the correct rx/tx burst. During this process, the
> send and receive threads are still working, but the bursts they call are changed. So:
> 1. it is recommended that trace be deleted from the dummy function.
> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.
>
>> +
>> #endif /* _ETH_PRIVATE_H_ */
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c8abda6dd7..9f7a0cbb8c 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -44,6 +44,9 @@
>> static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>> struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>>
>> +/* public fast-path API */
>> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>> /* spinlock for eth device callbacks */
>> static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>>
>> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
>> rte_eth_dev_callback_process(eth_dev,
>> RTE_ETH_EVENT_DESTROY, NULL);
>>
>> + eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
>> +
>> rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
>>
>> eth_dev->state = RTE_ETH_DEV_UNUSED;
>> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
>> (*dev->dev_ops->link_update)(dev, 0);
>> }
>>
>> + /* expose selection of PMD fast-path functions */
>> + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>> +
>> rte_ethdev_trace_start(port_id);
>> return 0;
>> }
>> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
>> return 0;
>> }
>>
>> + /* point fast-path functions to dummy ones */
>> + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>> +
>> dev->data->dev_started = 0;
>> ret = (*dev->dev_ops->dev_stop)(dev);
>> rte_ethdev_trace_stop(port_id, ret);
>> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>> return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>> }
>>
>> +RTE_INIT(eth_dev_init_fp_ops)
>> +{
>> + uint32_t i;
>> +
>> + for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
>> + eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
>> +}
>> +
>> RTE_INIT(eth_dev_init_cb_lists)
>> {
>> uint16_t i;
>> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
>> if (dev == NULL)
>> return;
>>
>> + /*
>> + * for secondary process, at that point we expect device
>> + * to be already 'usable', so shared data and all function pointers
>> + * for fast-path devops have to be setup properly inside rte_eth_dev.
>> + */
>> + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
>> + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
>> +
>> rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>>
>> dev->state = RTE_ETH_DEV_ATTACHED;
>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>> /**< @internal Check the status of a Tx descriptor */
>>
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> + void **data;
>> + /**< points to array of internal queue data pointers */
>> + void **clbk;
>> + /**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> + /**
>> + * Rx fast-path functions and related data.
>> + * 64-bit systems: occupies first 64B line
>> + */
>> + eth_rx_burst_t rx_pkt_burst;
>> + /**< PMD receive function. */
>> + eth_rx_queue_count_t rx_queue_count;
>> + /**< Get the number of used RX descriptors. */
>> + eth_rx_descriptor_status_t rx_descriptor_status;
>> + /**< Check the status of a Rx descriptor. */
>> + struct rte_ethdev_qdata rxq;
>> + /**< Rx queues data. */
>> + uintptr_t reserved1[3];
>> +
>> + /**
>> + * Tx fast-path functions and related data.
>> + * 64-bit systems: occupies second 64B line
>> + */
>> + eth_tx_burst_t tx_pkt_burst;
>
> Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cacheline?
> Other functions, e.g. rx_queue_count/descriptor_status, are only called at low frequency.
>
>> + /**< PMD transmit function. */
>> + eth_tx_prep_t tx_pkt_prepare;
>> + /**< PMD transmit prepare function. */
>> + eth_tx_descriptor_status_t tx_descriptor_status;
>> + /**< Check the status of a Tx descriptor. */
>> + struct rte_ethdev_qdata txq;
>> + /**< Tx queues data. */
>> + uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>
>> /**
>> * @internal
>>
>
>
> .
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 1/5] ethdev: update modify field flow action
@ 2021-10-10 23:45 8% ` Viacheslav Ovsiienko
2021-10-11 9:54 3% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-10 23:45 UTC (permalink / raw)
To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- immediate source can be presented either as an unsigned
64-bit integer or pointer to data pattern in memory.
There was no explicit pointer field defined in the union.
- the byte ordering for 64-bit integer was not specified.
Many fields have shorter lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was neither defined nor documented.
- a 64-bit integer is not large enough to hold IPv6
addresses.
In order to cover the issues and exclude any ambiguities
the following is done:
- introduce the explicit pointer field
in rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with 16-byte array
- update the modify field flow action documentation
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
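For illustration, a minimal sketch of how an application could fill the
updated structure for the MAC example in the documentation below (not part
of the patch; field names as in rte_flow.h, error handling omitted):

struct rte_flow_action_modify_field conf = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = {
                .field = RTE_FLOW_FIELD_MAC_DST,
                .offset = 16,   /* bit offset of the third byte */
        },
        .src = {
                .field = RTE_FLOW_FIELD_VALUE,
                /* same byte order/length as rte_flow_item_eth.dst,
                 * no explicit offset applied by the application */
                .value = { 0x00, 0x00, 0x85 },
        },
        .width = 8,             /* number of bits to copy */
};
struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
        .conf = &conf,
};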
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 16 ++++++++++++++++
doc/guides/rel_notes/release_21_11.rst | 9 +++++++++
lib/ethdev/rte_flow.h | 17 ++++++++++++++---
3 files changed, 39 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..1ceecb399f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,22 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
+in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
+rte_flow_item_ipv6 conventions, and so on. If the field size is larger than
+16 bytes, the pattern can be provided as a pointer only.
+
+The bitfield extracted from the memory being applied as second operation
+parameter is defined by action width and by the destination field offset.
+Application should provide the data in immediate value memory (either as
+buffer or by pointer) exactly as item field without any applied explicit offset,
+and destination packet field (with specified width and bit offset) will be
+replaced by immediate source bits from the same bit offset. For example,
+to replace the third byte of MAC address with value 0x85, application should
+specify destination width as 8, destination offset as 16, and provide immediate
+value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
.. _table_rte_flow_action_modify_field:
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..41a087d7c1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,13 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+ array is extended, data pointer field is explicitly added to union, the
+ action behavior is defined in more strict fashion and documentation updated.
+ The immediate value behavior has been changed, the entire immediate field
+ should be provided, and offset for immediate source bitfield is assigned
+ from destination one.
+
ABI Changes
-----------
@@ -222,6 +229,8 @@ ABI Changes
``rte_security_ipsec_xform`` to allow applications to configure SA soft
and hard expiry limits. Limits can be either in number of packets or bytes.
+* ethdev: ``rte_flow_action_modify_data`` structure updated.
+
Known Issues
------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..953924d42b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
};
/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
* Field description for MODIFY_FIELD action.
*/
struct rte_flow_action_modify_data {
@@ -3217,10 +3220,18 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in relevant rte_flow_item_xxx.
+ * The immediate source bitfield offset is inherited from
+ * the destination's one.
*/
- uint64_t value;
+ uint8_t value[16];
+ /*
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
--
2.18.1
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-09 12:05 0% ` fengchengwen
2021-10-11 1:18 0% ` fengchengwen
` (2 more replies)
2021-10-11 8:25 0% ` Andrew Rybchenko
1 sibling, 3 replies; 200+ results
From: fengchengwen @ 2021-10-09 12:05 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan
On 2021/10/7 19:27, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
> inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> (for secondary process) we call eth_dev_fp_ops_setup, which
> copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
> lib/ethdev/ethdev_private.h | 7 +++++
> lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 141 insertions(+)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> return str == NULL ? -1 : 0;
> }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> + __rte_unused struct rte_mbuf **rx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> + __rte_unused struct rte_mbuf **tx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> + rte_errno = ENOTSUP;
> + return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
A port_id parameter would be preferable here; it hides rte_eth_fp_ops as much as possible.
> +{
> + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> + static const struct rte_eth_fp_ops dummy_ops = {
> + .rx_pkt_burst = dummy_eth_rx_burst,
> + .tx_pkt_burst = dummy_eth_tx_burst,
> + .rxq = {.data = dummy_data, .clbk = dummy_data,},
> + .txq = {.data = dummy_data, .clbk = dummy_data,},
> + };
> +
> + *fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> + const struct rte_eth_dev *dev)
Because fp_ops and eth_dev have a one-to-one correspondence, it's better
to use only the port_id parameter.
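Just to illustrate the suggestion (not part of the patch), the prototypes
could then become:

void eth_dev_fp_ops_reset(uint16_t port_id);
void eth_dev_fp_ops_setup(uint16_t port_id);

with both rte_eth_fp_ops[] and rte_eth_devices[] looked up internally by port_id.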
> +{
> + fpo->rx_pkt_burst = dev->rx_pkt_burst;
> + fpo->tx_pkt_burst = dev->tx_pkt_burst;
> + fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> + fpo->rx_queue_count = dev->rx_queue_count;
> + fpo->rx_descriptor_status = dev->rx_descriptor_status;
> + fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> + fpo->rxq.data = dev->data->rx_queues;
> + fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> + fpo->txq.data = dev->data->tx_queues;
> + fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> /* Parse devargs value for representor parameter. */
> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> + const struct rte_eth_dev *dev);
Some drivers change the transmit/receive functions during operation. E.g.
in the hns3 driver, when a reset is detected, the primary process sets the rx/tx bursts
to dummy ones; after the reset is processed, it restores the correct rx/tx bursts. During
this process the send and receive threads keep working, but the bursts they call change. So:
1. it is recommended that the trace be removed from the dummy functions.
2. make the eth_dev_fp_ops_reset/setup interfaces public for driver usage.
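Roughly the driver-side flow meant above (function names are illustrative,
not the actual hns3 code):

static void
drv_reset_start(struct rte_eth_dev *dev)
{
        /* data path keeps running, so switch it to dummy callbacks */
        dev->rx_pkt_burst = drv_dummy_rx_burst;
        dev->tx_pkt_burst = drv_dummy_tx_burst;
        /* with this patch the flat array has to be updated as well */
        eth_dev_fp_ops_reset(rte_eth_fp_ops + dev->data->port_id);
}

static void
drv_reset_done(struct rte_eth_dev *dev)
{
        dev->rx_pkt_burst = drv_real_rx_burst;
        dev->tx_pkt_burst = drv_real_tx_burst;
        eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
}

which is only possible if the reset/setup helpers are exported to drivers.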
> +
> #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
> static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>
> +/* public fast-path API */
> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
> /* spinlock for eth device callbacks */
> static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
> rte_eth_dev_callback_process(eth_dev,
> RTE_ETH_EVENT_DESTROY, NULL);
>
> + eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> +
> rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
>
> eth_dev->state = RTE_ETH_DEV_UNUSED;
> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
> (*dev->dev_ops->link_update)(dev, 0);
> }
>
> + /* expose selection of PMD fast-path functions */
> + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> +
> rte_ethdev_trace_start(port_id);
> return 0;
> }
> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
> return 0;
> }
>
> + /* point fast-path functions to dummy ones */
> + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> +
> dev->data->dev_started = 0;
> ret = (*dev->dev_ops->dev_stop)(dev);
> rte_ethdev_trace_stop(port_id, ret);
> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> }
>
> +RTE_INIT(eth_dev_init_fp_ops)
> +{
> + uint32_t i;
> +
> + for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> + eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> +}
> +
> RTE_INIT(eth_dev_init_cb_lists)
> {
> uint16_t i;
> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
> if (dev == NULL)
> return;
>
> + /*
> + * for secondary process, at that point we expect device
> + * to be already 'usable', so shared data and all function pointers
> + * for fast-path devops have to be setup properly inside rte_eth_dev.
> + */
> + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
> rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>
> dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> /**< @internal Check the status of a Tx descriptor */
>
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> + void **data;
> + /**< points to array of internal queue data pointers */
> + void **clbk;
> + /**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> + /**
> + * Rx fast-path functions and related data.
> + * 64-bit systems: occupies first 64B line
> + */
> + eth_rx_burst_t rx_pkt_burst;
> + /**< PMD receive function. */
> + eth_rx_queue_count_t rx_queue_count;
> + /**< Get the number of used RX descriptors. */
> + eth_rx_descriptor_status_t rx_descriptor_status;
> + /**< Check the status of a Rx descriptor. */
> + struct rte_ethdev_qdata rxq;
> + /**< Rx queues data. */
> + uintptr_t reserved1[3];
> +
> + /**
> + * Tx fast-path functions and related data.
> + * 64-bit systems: occupies second 64B line
> + */
> + eth_tx_burst_t tx_pkt_burst;
Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cacheline?
Other functions, e.g. rx_queue_count/descriptor_status, are only called at low frequency.
> + /**< PMD transmit function. */
> + eth_tx_prep_t tx_pkt_prepare;
> + /**< PMD transmit prepare function. */
> + eth_tx_descriptor_status_t tx_descriptor_status;
> + /**< Check the status of a Tx descriptor. */
> + struct rte_ethdev_qdata txq;
> + /**< Tx queues data. */
> + uintptr_t reserved2[3];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>
> /**
> * @internal
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v24 0/6] support dmadev
@ 2021-10-09 9:33 3% ` Chengwen Feng
0 siblings, 0 replies; 200+ results
From: Chengwen Feng @ 2021-10-09 9:33 UTC (permalink / raw)
To: thomas, ferruh.yigit, bruce.richardson, jerinj, jerinjacobk,
andrew.rybchenko
Cc: dev, mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
honnappa.nagarahalli, david.marchand, sburla, pkapoor,
konstantin.ananyev, conor.walsh, kevin.laatz
This patch set contains six patches adding the new dmadev library.
Chengwen Feng (6):
dmadev: introduce DMA device library
dmadev: add control plane API support
dmadev: add data plane API support
dmadev: add multi-process support
dma/skeleton: introduce skeleton dmadev driver
app/test: add dmadev API test
---
v24:
* use rte_dma_fp_object to hide implementation details.
* support group doxygen for RTE_DMA_CAPA_* and RTE_DMA_OP_*.
* adjusted the naming of some functions.
* fix typo.
v23:
* split multi-process support from 1st patch.
* fix some static check warning.
* fix skeleton cpu thread zero_req_count flip bug.
* add test_dmadev_api.h.
* add the description of modifying the dmadev state when init OK.
v22:
* function prefix change from rte_dmadev_* to rte_dma_*.
* change to prefix comment in most scenarios.
* dmadev dev_id use int16_t type.
* fix typo.
* organize patchsets in incremental mode.
v21:
* add comment for reserved fields of struct rte_dmadev.
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
callbacks of the PMD, this is mainly used to enhance ABI
compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix the potential memory leak of ut.
* redefine skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when execute ut.
MAINTAINERS | 7 +
app/test/meson.build | 4 +
app/test/test_dmadev.c | 41 +
app/test/test_dmadev_api.c | 574 +++++++++++++
app/test/test_dmadev_api.h | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/dmadevs/index.rst | 12 +
doc/guides/index.rst | 1 +
doc/guides/prog_guide/dmadev.rst | 120 +++
doc/guides/prog_guide/img/dmadev.svg | 283 +++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/dma/meson.build | 6 +
drivers/dma/skeleton/meson.build | 7 +
drivers/dma/skeleton/skeleton_dmadev.c | 571 +++++++++++++
drivers/dma/skeleton/skeleton_dmadev.h | 61 ++
drivers/dma/skeleton/version.map | 3 +
drivers/meson.build | 1 +
lib/dmadev/meson.build | 7 +
lib/dmadev/rte_dmadev.c | 844 +++++++++++++++++++
lib/dmadev/rte_dmadev.h | 1048 ++++++++++++++++++++++++
lib/dmadev/rte_dmadev_core.h | 78 ++
lib/dmadev/rte_dmadev_pmd.h | 173 ++++
lib/dmadev/version.map | 35 +
lib/meson.build | 1 +
26 files changed, 3891 insertions(+)
create mode 100644 app/test/test_dmadev.c
create mode 100644 app/test/test_dmadev_api.c
create mode 100644 app/test/test_dmadev_api.h
create mode 100644 doc/guides/dmadevs/index.rst
create mode 100644 doc/guides/prog_guide/dmadev.rst
create mode 100644 doc/guides/prog_guide/img/dmadev.svg
create mode 100644 drivers/dma/meson.build
create mode 100644 drivers/dma/skeleton/meson.build
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
create mode 100644 drivers/dma/skeleton/version.map
create mode 100644 lib/dmadev/meson.build
create mode 100644 lib/dmadev/rte_dmadev.c
create mode 100644 lib/dmadev/rte_dmadev.h
create mode 100644 lib/dmadev/rte_dmadev_core.h
create mode 100644 lib/dmadev/rte_dmadev_pmd.h
create mode 100644 lib/dmadev/version.map
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v16 0/9] eal: Add EAL API for threading
2021-10-08 22:40 3% ` [dpdk-dev] [PATCH v15 " Narcisa Ana Maria Vasile
@ 2021-10-09 7:41 3% ` Narcisa Ana Maria Vasile
0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-10-09 7:41 UTC (permalink / raw)
To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
talshn, ocardona
Cc: bruce.richardson, david.marchand, pallavi.kadam
From: Narcisa Vasile <navasile@microsoft.com>
EAL thread API
**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.
**Goals**
* Introduce a generic EAL API for threading support that will remove
the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
3rd party thread library through a configuration option.
**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)
**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();
lib/librte_eal/common/rte_thread.c
int rte_thread_create()
{
return pthread_create();
}
lib/librte_eal/windows/rte_thread.c
int rte_thread_create()
{
return CreateThread();
}
-----------------------------------------------------
**Thread attributes**
When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:
typedef struct
{
enum rte_thread_priority priority;
rte_cpuset_t cpuset;
} rte_thread_attr_t;
The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
*Priority* is represented through an enum that currently advertises
two values for priority:
- RTE_THREAD_PRIORITY_NORMAL
- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority - sets the priority of a thread
rte_thread_get_priority - retrieves the priority of a thread
from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
with a new value for priority
The user can choose thread priority through an EAL parameter,
when starting an application. If EAL parameter is not used,
the per-platform default value for thread priority is used.
Otherwise administrator has an option to set one of available options:
--thread-prio normal
--thread-prio realtime
Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff
*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
rte_thread_attr_t object
rte_thread_set/get_affinity – sets/gets the affinity of a thread
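A minimal usage sketch of the API above (the exact prototypes are those
defined in the series' rte_thread.h; this is only an approximation and
error handling is omitted):

static uint32_t worker(void *arg)
{
        /* thread start routine; exact prototype per rte_thread.h */
        (void)arg;
        return 0;
}

void start_worker(void)
{
        rte_thread_attr_t attr;
        rte_thread_t tid;
        rte_cpuset_t cpuset;

        rte_thread_attr_init(&attr);
        rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
        CPU_ZERO(&cpuset);
        CPU_SET(1, &cpuset);
        rte_thread_attr_set_affinity(&attr, &cpuset);

        rte_thread_create(&tid, &attr, worker, NULL);
        rte_thread_join(tid, NULL);
}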
**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided.
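For example (illustrative mapping only, the function name is not taken
from the patches; such a helper lives in the Windows-only EAL code where
windows.h and errno.h are available):

static int
translate_win32_error(DWORD error)
{
        switch (error) {
        case ERROR_SUCCESS:
                return 0;
        case ERROR_INVALID_PARAMETER:
                return EINVAL;
        case ERROR_NOT_ENOUGH_MEMORY:
        case ERROR_NO_SYSTEM_RESOURCES:
                return ENOMEM;
        default:
                return EINVAL;
        }
}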
**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
(such as pthread_setname_np, etc.)
v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
are not available on the system.
- Fix priority unit test to avoid termination of thread before the
priority is checked.
v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
different thread, the function returns immediately. Otherwise,
the mutex will be acquired.
- Add function for getting the priority of a thread.
An auxiliary function that translates the OS priority to the
EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
Verify mutex locking, verify barrier return values. Add test for
statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
using pthread_set_affinity() after the thread is created.
v14:
- Remove patch "eal: add EAL argument for setting thread priority"
This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
as the default.
- Fix issue with thread return value.
v13:
- Fix syntax error in unit tests
v12:
- Fix freebsd warning about initializer in unit tests
v11:
- Add unit tests for thread API
- Rebase
v10:
- Remove patch no. 10. It will be broken down in subpatches
and sent as a different patchset that depends on this one.
This is done due to the ABI breaks that would be caused by patch 10.
- Replace unix/rte_thread.c with common/rte_thread.c
- Remove initializations that may prevent compiler from issuing useful
warnings.
- Remove rte_thread_types.h and rte_windows_thread_types.h
- Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
- Remove functions that retrieves thread handle from process handle
- Remove rte_thread_cancel() until same behavior is obtained on
all platforms.
- Fix rte_thread_detach() function description,
return value and remove empty line.
- Reimplement mutex functions. Add compatible representation for mutex
identifier. Add macro to replace static mutex initialization instances.
- Fix commit messages (lines too long, remove unicode symbols)
v9:
- Sign patches
v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value
v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.
v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()
v5:
- update cover letter with more details on the priority argument
v4:
- fix function description
- rebase
v3:
- rebase
v2:
- revert changes that break ABI
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c
Narcisa Vasile (9):
eal: add basic threading functions
eal: add thread attributes
eal/windows: translate Windows errors to errno-style errors
eal: implement functions for thread affinity management
eal: implement thread priority management functions
eal: add thread lifetime management
eal: implement functions for mutex management
eal: implement functions for thread barrier management
Add unit tests for thread API
app/test/meson.build | 2 +
app/test/test_threads.c | 372 ++++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_thread.c | 497 ++++++++++++++++++++++++
lib/eal/include/rte_thread.h | 435 ++++++++++++++++++++-
lib/eal/unix/meson.build | 1 -
lib/eal/unix/rte_thread.c | 92 -----
lib/eal/version.map | 22 ++
lib/eal/windows/eal_lcore.c | 176 ++++++---
lib/eal/windows/eal_windows.h | 10 +
lib/eal/windows/include/sched.h | 2 +-
lib/eal/windows/rte_thread.c | 656 ++++++++++++++++++++++++++++++--
12 files changed, 2093 insertions(+), 173 deletions(-)
create mode 100644 app/test/test_threads.c
create mode 100644 lib/eal/common/rte_thread.c
delete mode 100644 lib/eal/unix/rte_thread.c
--
2.31.0.vfs.0.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v15 0/9] eal: Add EAL API for threading
@ 2021-10-08 22:40 3% ` Narcisa Ana Maria Vasile
2021-10-09 7:41 3% ` [dpdk-dev] [PATCH v16 " Narcisa Ana Maria Vasile
0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-10-08 22:40 UTC (permalink / raw)
To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
talshn, ocardona
Cc: bruce.richardson, david.marchand, pallavi.kadam
From: Narcisa Vasile <navasile@microsoft.com>
EAL thread API
**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.
**Goals**
* Introduce a generic EAL API for threading support that will remove
the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
3rd party thread library through a configuration option.
**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)
**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();
lib/librte_eal/common/rte_thread.c
int rte_thread_create()
{
return pthread_create();
}
lib/librte_eal/windows/rte_thread.c
int rte_thread_create()
{
return CreateThread();
}
-----------------------------------------------------
**Thread attributes**
When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:
typedef struct
{
enum rte_thread_priority priority;
rte_cpuset_t cpuset;
} rte_thread_attr_t;
The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
*Priority* is represented through an enum that currently advertises
two values for priority:
- RTE_THREAD_PRIORITY_NORMAL
- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority - sets the priority of a thread
rte_thread_get_priority - retrieves the priority of a thread
from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
with a new value for priority
The user can choose thread priority through an EAL parameter,
when starting an application. If EAL parameter is not used,
the per-platform default value for thread priority is used.
Otherwise administrator has an option to set one of available options:
--thread-prio normal
--thread-prio realtime
Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff
*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
rte_thread_attr_t object
rte_thread_set/get_affinity – sets/gets the affinity of a thread
**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided.
**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
(such as pthread_setname_np, etc.)
v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
different thread, the function returns immediately. Otherwise,
the mutex will be acquired.
- Add function for getting the priority of a thread.
An auxiliary function that translates the OS priority to the
EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
Verify mutex locking, verify barrier return values. Add test for
statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
using pthread_set_affinity() after the thread is created.
v14:
- Remove patch "eal: add EAL argument for setting thread priority"
This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
as the default.
- Fix issue with thread return value.
v13:
- Fix syntax error in unit tests
v12:
- Fix freebsd warning about initializer in unit tests
v11:
- Add unit tests for thread API
- Rebase
v10:
- Remove patch no. 10. It will be broken down in subpatches
and sent as a different patchset that depends on this one.
This is done due to the ABI breaks that would be caused by patch 10.
- Replace unix/rte_thread.c with common/rte_thread.c
- Remove initializations that may prevent compiler from issuing useful
warnings.
- Remove rte_thread_types.h and rte_windows_thread_types.h
- Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
- Remove functions that retrieves thread handle from process handle
- Remove rte_thread_cancel() until same behavior is obtained on
all platforms.
- Fix rte_thread_detach() function description,
return value and remove empty line.
- Reimplement mutex functions. Add compatible representation for mutex
identifier. Add macro to replace static mutex initialization instances.
- Fix commit messages (lines too long, remove unicode symbols)
v9:
- Sign patches
v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value
v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.
v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()
v5:
- update cover letter with more details on the priority argument
v4:
- fix function description
- rebase
v3:
- rebase
v2:
- revert changes that break ABI
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c
Narcisa Vasile (9):
eal: add basic threading functions
eal: add thread attributes
eal/windows: translate Windows errors to errno-style errors
eal: implement functions for thread affinity management
eal: implement thread priority management functions
eal: add thread lifetime management
eal: implement functions for mutex management
eal: implement functions for thread barrier management
Add unit tests for thread API
app/test/meson.build | 2 +
app/test/test_threads.c | 359 +++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_thread.c | 496 ++++++++++++++++++++++++
lib/eal/include/rte_thread.h | 435 ++++++++++++++++++++-
lib/eal/unix/meson.build | 1 -
lib/eal/unix/rte_thread.c | 92 -----
lib/eal/version.map | 22 ++
lib/eal/windows/eal_lcore.c | 176 ++++++---
lib/eal/windows/eal_windows.h | 10 +
lib/eal/windows/include/sched.h | 2 +-
lib/eal/windows/rte_thread.c | 656 ++++++++++++++++++++++++++++++--
12 files changed, 2079 insertions(+), 173 deletions(-)
create mode 100644 app/test/test_threads.c
create mode 100644 lib/eal/common/rte_thread.c
delete mode 100644 lib/eal/unix/rte_thread.c
--
2.31.0.vfs.0.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
@ 2021-10-08 20:45 3% ` Akhil Goyal
2021-10-11 8:31 0% ` Thomas Monjalon
2021-10-12 8:50 0% ` [dpdk-dev] " Kinsella, Ray
2021-10-11 10:46 0% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Zhang, Roy Fan
2021-10-12 9:55 3% ` Kinsella, Ray
2 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-10-08 20:45 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal
In struct rte_security_ipsec_sa_options, every new option added
causes an ABI breakage. To avoid this, a reserved_opts bitfield
is added for the remaining bits available in the structure.
Now, for every new SA option, reserved_opts can be reduced
and the new option can be added.
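For example, a future option would be added like this (hypothetical
option name shown only for illustration):

struct rte_security_ipsec_sa_options {
        /* ... existing options ... */
        uint32_t l4_csum_enable : 1;

        /* new option consumes one bit from the reserved space */
        uint32_t new_option : 1;

        /* reduced from 18 to 17 */
        uint32_t reserved_opts : 17;
};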
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v2: rebase and removed libabigail.abignore change.
Exception may be added when there is a need for change.
lib/security/rte_security.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..c0ea13892e 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -258,6 +258,12 @@ struct rte_security_ipsec_sa_options {
* PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
*/
uint32_t l4_csum_enable : 1;
+
+ /** Reserved bit fields for future extension
+ *
+ * Note: reduce number of bits in reserved_opts for every new option
+ */
+ uint32_t reserved_opts : 18;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
@ 2021-10-08 20:45 3% ` Akhil Goyal
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Akhil Goyal @ 2021-10-08 20:45 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
adwivedi, ciara.power, Akhil Goyal
Remove *_LIST_END enumerators from asymmetric crypto
lib to avoid ABI breakage for every new addition in
enums.
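The problem in a nutshell (illustrative application code, check_op() is
just a placeholder):

uint32_t i;

/*
 * An application built against an older library has the old numeric
 * value of RTE_CRYPTO_ASYM_OP_LIST_END compiled in. Adding a new
 * enumerator before LIST_END changes that value, so bounds like the
 * one below silently disagree with a newer shared library (an ABI
 * break).
 */
for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++)
        check_op(i);

/* after this patch the last real enumerator is used as the bound */
for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++)
        check_op(i);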
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
v2: no change
app/test/test_cryptodev_asym.c | 4 ++--
drivers/crypto/qat/qat_asym.c | 2 +-
lib/cryptodev/rte_crypto_asym.h | 4 ----
3 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..603b2e4609 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -541,7 +541,7 @@ test_one_case(const void *test_case, int sessionless)
printf(" %u) TestCase %s %s\n", test_index++,
tc.modex.description, test_msg);
} else {
- for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+ for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (tc.rsa_data.op_type_flags & (1 << i)) {
if (tc.rsa_data.key_exp) {
@@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
rte_crypto_asym_xform_strings[capa->xform_type]);
printf("operation supported -");
- for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+ for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
/* check supported operations */
if (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
printf(" %s",
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 85973812a8..026625a4d2 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev,
err = -EINVAL;
goto error;
}
- } else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
+ } else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
err = -EINVAL;
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9c866f553f..5edf658572 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
*/
RTE_CRYPTO_ASYM_XFORM_ECPM,
/**< Elliptic Curve Point Multiplication */
- RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
- /**< End of list */
};
/**
@@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
/**< DH Public Key generation operation */
RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
/**< DH Shared Secret compute operation */
- RTE_CRYPTO_ASYM_OP_LIST_END
};
/**
@@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
/**< RSA PKCS#1 OAEP padding scheme */
RTE_CRYPTO_RSA_PADDING_PSS,
/**< RSA PKCS#1 PSS padding scheme */
- RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
};
/**
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
` (4 preceding siblings ...)
2021-10-07 11:27 9% ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-08 18:13 0% ` Slava Ovsiienko
2021-10-11 9:22 0% ` Andrew Rybchenko
6 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2021-10-08 18:13 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
Matan Azrad, sthemmin, NBU-Contact-longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, chenbo.xia, NBU-Contact-Thomas Monjalon,
ferruh.yigit, mdr, jay.jayatheerthan
Hi,
I've reviewed the series, and it looks good to me.
I see we did not introduce new indirect referencing on the datapath
(just replaced rte_eth_devices[] being hidden with the new rte_eth_fp_ops[].)
My only concern - we'll get two places where pointers to the PMDs routines are stored,
and it means potential unsynchro between them, but I do not see the actual scenario for that.
For example, mlx5 PMD proposes multiple tx/rx_burst routines, and actual routine
selection happens on dev_start(), but then it is not supposed to be changed till dev_stop().
The internal PMD checks like this:
"if (dev->rx_pkt_burst == mlx5_rx_burst)"
supposed to continue working OK as well.
Hence, for series:
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
With best regards,
Slava
> -----Original Message-----
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Sent: Thursday, October 7, 2021 14:28
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com; Matan
> Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> sthemmin@microsoft.com; NBU-Contact-longli <longli@microsoft.com>;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com;
> andrew.rybchenko@oktetlabs.ru; mczekaj@marvell.com;
> jiawenwu@trustnetic.com; jianwang@trustnetic.com;
> maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [PATCH v5 0/7] hide eth dev related structures
>
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
> related data based on their functionality:
> first 64B line for Rx, second one for Tx.
> Didn't observe any real performance difference comparing to
> original layout. Though decided to keep a new one, as it seems
> a bit more plausible.
>
> v4 changes:
> - Fix secondary process attach (Pavan)
> - Fix build failure (Ferruh)
> - Update lib/ethdev/verion.map (Ferruh)
> Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> section makes checkpatch.sh to complain.
>
> v3 changes:
> - Changes in public struct naming (Jerin/Haiyue)
> - Split patches
> - Update docs
> - Shamelessly included Andrew's patch:
> https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
> 1-andrew.rybchenko@oktetlabs.ru/
> into these series.
> I have to do similar thing here, so decided to avoid duplicated effort.
>
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
> and not visible to the user.
> That should allow future possible changes to core ethdev related structures to
> be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
>
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> related data pointer from rte_eth_dev into a separate flat array.
> We keep it public to still be able to use inline functions for these
> 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> Note that apart from function pointers itself, each element of this
> flat array also contains two opaque pointers for each ethdev:
> 1) a pointer to an array of internal queue data pointers
> 2) points to array of queue callback data pointers.
> Note that exposing this extra information allows us to avoid extra
> changes inside PMD level, plus should help to avoid possible
> performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
> (rte_eth_rx_burst(), etc.) to use new public flat array.
> While it is an ABI breakage, this change is intended to be transparent
> for both users (no changes in user app is required) and PMD developers
> (no changes in PMD is required).
> One extra note - with new implementation RX/TX callback invocation
> will cost one extra function call with this changes. That might cause
> some slowdown for code-path with RX/TX callbacks heavily involved.
> Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> things into internal header: <ethdev_driver.h>.
>
> That approach was selected to:
> - Avoid(/minimize) possible performance losses.
> - Minimize required changes inside PMDs.
>
> Performance testing results (ICX 2.0GHz, E810 (ice)):
> - testpmd macswap fwd mode, plus
> a) no RX/TX callbacks:
> no actual slowdown observed
> b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> ~2% slowdown
> - l3fwd: no actual slowdown observed
>
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
>
> Andrew Rybchenko (1):
> ethdev: remove legacy Rx descriptor done API
>
> Konstantin Ananyev (6):
> ethdev: allocate max space for internal queue array
> ethdev: change input parameters for rx_queue_count
> ethdev: copy fast-path API into separate structure
> ethdev: make fast-path functions to use new flat array
> ethdev: add API to retrieve multiple ethernet addresses
> ethdev: hide eth dev related structures
>
> app/test-pmd/config.c | 23 +-
> doc/guides/nics/features.rst | 6 +-
> doc/guides/rel_notes/deprecation.rst | 5 -
> doc/guides/rel_notes/release_21_11.rst | 21 ++
> drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
> drivers/net/ark/ark_ethdev_rx.c | 4 +-
> drivers/net/ark/ark_ethdev_rx.h | 3 +-
> drivers/net/atlantic/atl_ethdev.h | 2 +-
> drivers/net/atlantic/atl_rxtx.c | 9 +-
> drivers/net/bnxt/bnxt_ethdev.c | 8 +-
> drivers/net/cxgbe/base/adapter.h | 2 +-
> drivers/net/dpaa/dpaa_ethdev.c | 9 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
> drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
> drivers/net/e1000/e1000_ethdev.h | 10 +-
> drivers/net/e1000/em_ethdev.c | 1 -
> drivers/net/e1000/em_rxtx.c | 21 +-
> drivers/net/e1000/igb_ethdev.c | 2 -
> drivers/net/e1000/igb_rxtx.c | 21 +-
> drivers/net/enic/enic_ethdev.c | 12 +-
> drivers/net/fm10k/fm10k.h | 5 +-
> drivers/net/fm10k/fm10k_ethdev.c | 1 -
> drivers/net/fm10k/fm10k_rxtx.c | 29 +-
> drivers/net/hns3/hns3_rxtx.c | 7 +-
> drivers/net/hns3/hns3_rxtx.h | 2 +-
> drivers/net/i40e/i40e_ethdev.c | 1 -
> drivers/net/i40e/i40e_rxtx.c | 30 +-
> drivers/net/i40e/i40e_rxtx.h | 4 +-
> drivers/net/iavf/iavf_rxtx.c | 4 +-
> drivers/net/iavf/iavf_rxtx.h | 2 +-
> drivers/net/ice/ice_rxtx.c | 4 +-
> drivers/net/ice/ice_rxtx.h | 2 +-
> drivers/net/igc/igc_ethdev.c | 1 -
> drivers/net/igc/igc_txrx.c | 23 +-
> drivers/net/igc/igc_txrx.h | 5 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 2 -
> drivers/net/ixgbe/ixgbe_ethdev.h | 5 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
> drivers/net/mlx5/mlx5_rx.c | 26 +-
> drivers/net/mlx5/mlx5_rx.h | 2 +-
> drivers/net/netvsc/hn_rxtx.c | 4 +-
> drivers/net/netvsc/hn_var.h | 3 +-
> drivers/net/nfp/nfp_rxtx.c | 4 +-
> drivers/net/nfp/nfp_rxtx.h | 3 +-
> drivers/net/octeontx2/otx2_ethdev.c | 1 -
> drivers/net/octeontx2/otx2_ethdev.h | 3 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 20 +-
> drivers/net/sfc/sfc_ethdev.c | 29 +-
> drivers/net/thunderx/nicvf_ethdev.c | 3 +-
> drivers/net/thunderx/nicvf_rxtx.c | 4 +-
> drivers/net/thunderx/nicvf_rxtx.h | 2 +-
> drivers/net/txgbe/txgbe_ethdev.h | 3 +-
> drivers/net/txgbe/txgbe_rxtx.c | 4 +-
> drivers/net/vhost/rte_eth_vhost.c | 4 +-
> drivers/net/virtio/virtio_ethdev.c | 1 -
> lib/ethdev/ethdev_driver.h | 148 +++++++++
> lib/ethdev/ethdev_private.c | 83 +++++
> lib/ethdev/ethdev_private.h | 7 +
> lib/ethdev/rte_ethdev.c | 89 ++++--
> lib/ethdev/rte_ethdev.h | 288 ++++++++++++------
> lib/ethdev/rte_ethdev_core.h | 171 +++--------
> lib/ethdev/version.map | 8 +-
> lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
> lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
> lib/eventdev/rte_eventdev.c | 2 +-
> lib/metrics/rte_metrics_telemetry.c | 2 +-
> 67 files changed, 677 insertions(+), 564 deletions(-)
>
> --
> 2.26.3
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v6] ethdev: fix representor port ID search by name
@ 2021-10-08 9:27 4% ` Andrew Rybchenko
2021-10-11 12:30 4% ` [dpdk-dev] [PATCH v7] " Andrew Rybchenko
2021-10-11 12:53 4% ` [dpdk-dev] [PATCH v8] " Andrew Rybchenko
3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-08 9:27 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
Cc: dev, Viacheslav Galaktionov, Xueming Li
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
The patch is required for all PMDs which do not provide representors
info on the representor itself.
The function, rte_eth_representor_id_get(), is used in
eth_representor_cmp() which is required in ethdev class iterator to
search ethdev port ID by name (representor case). Before the patch
the function is called on the representor itself and tries to get
representors info to match.
Search of port ID by name is used after hotplug to find out port ID
of the just plugged device.
Getting a list of representors from a representor does not make sense.
Instead, a backer device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI breaking release.
Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in a new backer_port_id field in
rte_eth_dev_data structure. Get ID by name will not work.
mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
v6:
- provide more information in the changeset description
v5:
- try to improve name: backer_port_id instead of parent_port_id
- init new field to RTE_MAX_ETHPORTS on allocation to avoid
zero port usage by default
v4:
- apply mlx5 review notes: remove fallback from generic ethdev
code and add fallback to mlx5 code to handle legacy usecase
v3:
- fix mlx5 build breakage
v2:
- fix mlx5 review notes
- try device port ID first before parent in order to address
backward compatibility issue
drivers/net/bnxt/bnxt_reps.c | 1 +
drivers/net/enic/enic_vf_representor.c | 1 +
drivers/net/i40e/i40e_vf_representor.c | 1 +
drivers/net/ice/ice_dcf_vf_representor.c | 1 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
lib/ethdev/ethdev_driver.h | 6 +++---
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 9 +++++----
lib/ethdev/rte_ethdev_core.h | 6 ++++++
11 files changed, 46 insertions(+), 8 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = rep_params->vf_id;
+ eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = vf->vf_id;
+ eth_dev->data->backer_port_id = pf->port_id;
eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
sizeof(struct rte_ether_addr) *
ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = pf->dev_data->port_id;
/* Setting the number queues allocated to the VF */
ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
vf_rep_eth_dev->data->representor_id = repr->vf_id;
+ vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
/* Set representor device ops */
ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..612340b3b6 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
priv->mp_id.port_id = eth_dev->data->port_id;
strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..a9c244c7dc 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->backer_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
/*
* Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
* For backward compatibility, if no representor info, direct
* map legacy VF (no controller and pf).
*
- * @param ethdev
- * Handle of ethdev port.
+ * @param port_id
+ * Port ID of the backing device.
* @param type
* Representor type.
* @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
*/
__rte_internal
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
c = i / (np * nf);
p = (i / nf) % np;
f = i % nf;
- if (rte_eth_representor_id_get(edev,
+ if (rte_eth_representor_id_get(edev->data->backer_port_id,
eth_da.type,
eth_da.nb_mh_controllers == 0 ? -1 :
eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
eth_dev = eth_dev_get(port_id);
strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
eth_dev->data->mtu = RTE_ETHER_MTU;
pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
}
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
return -EINVAL;
/* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+ ret = rte_eth_representor_info_get(port_id, NULL);
if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
controller == -1 && pf == -1) {
/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
if (info == NULL)
return -ENOMEM;
info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+ ret = rte_eth_representor_info_get(port_id, info);
if (ret < 0)
goto out;
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
continue;
if (info->ranges[i].id_end < info->ranges[i].id_base) {
RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- ethdev->data->port_id, info->ranges[i].id_base,
+ port_id, info->ranges[i].id_base,
info->ranges[i].id_end, i);
continue;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
/**< Switch-specific identifier.
* Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
*/
+ uint16_t backer_port_id;
+ /**< Port ID of the backing device.
+ * This device will be used to query representor
+ * info and calculate representor IDs.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
--
2.30.2
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
@ 2021-10-08 8:39 0% ` Xueming(Steven) Li
0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-10-08 8:39 UTC (permalink / raw)
To: johndale, qi.z.zhang, Slava Ovsiienko, somnath.kotur,
ajit.khaparde, andrew.rybchenko, Matan Azrad, hyonkim,
qiming.yang
Cc: beilei.xing, NBU-Contact-Thomas Monjalon, dev, haiyue.wang,
viacheslav.galaktionov, ferruh.yigit
On Fri, 2021-10-01 at 14:39 +0300, Andrew Rybchenko wrote:
> Hello PMD maintainers,
>
> please, review the patch.
>
> It is especially important for net/mlx5 since changes there are
> not trivial.
>
> Thanks,
> Andrew.
>
> On 9/13/21 2:26 PM, Andrew Rybchenko wrote:
> > From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> >
> > Getting a list of representors from a representor does not make sense.
> > Instead, a parent device should be used.
> >
> > To this end, extend the rte_eth_dev_data structure to include the port ID
> > of the backing device for representors.
> >
> > Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> > Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > Acked-by: Beilei Xing <beilei.xing@intel.com>
> > ---
> > The new field is added into the hole in rte_eth_dev_data structure.
> > The patch does not change ABI, but extra care is required since ABI
> > check is disabled for the structure because of the libabigail bug [1].
> > It should not be a problem anyway since 21.11 is an ABI breaking release.
> >
> > Potentially it is bad for out-of-tree drivers which implement
> > representors but do not fill in a new backer_port_id field in
> > rte_eth_dev_data structure. Get ID by name will not work.
> >
> > mlx5 changes should be reviewed by maintainers very carefully, since
> > we are not sure if we patch it correctly.
> >
> > [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> >
> > v5:
> > - try to improve name: backer_port_id instead of parent_port_id
> > - init new field to RTE_MAX_ETHPORTS on allocation to avoid
> > zero port usage by default
> >
> > v4:
> > - apply mlx5 review notes: remove fallback from generic ethdev
> > code and add fallback to mlx5 code to handle legacy usecase
> >
> > v3:
> > - fix mlx5 build breakage
> >
> > v2:
> > - fix mlx5 review notes
> > - try device port ID first before parent in order to address
> > backward compatibility issue
> >
> > drivers/net/bnxt/bnxt_reps.c | 1 +
> > drivers/net/enic/enic_vf_representor.c | 1 +
> > drivers/net/i40e/i40e_vf_representor.c | 1 +
> > drivers/net/ice/ice_dcf_vf_representor.c | 1 +
> > drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> > drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
> > drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
> > lib/ethdev/ethdev_driver.h | 6 +++---
> > lib/ethdev/rte_class_eth.c | 2 +-
> > lib/ethdev/rte_ethdev.c | 9 +++++----
> > lib/ethdev/rte_ethdev_core.h | 6 ++++++
> > 11 files changed, 46 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
> > index bdbad53b7d..0d50c0f1da 100644
> > --- a/drivers/net/bnxt/bnxt_reps.c
> > +++ b/drivers/net/bnxt/bnxt_reps.c
> > @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
> > eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> > RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> > eth_dev->data->representor_id = rep_params->vf_id;
> > + eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
> >
> > rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
> > memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
> > diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
> > index 79dd6e5640..fedb09ecd6 100644
> > --- a/drivers/net/enic/enic_vf_representor.c
> > +++ b/drivers/net/enic/enic_vf_representor.c
> > @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
> > eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> > RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> > eth_dev->data->representor_id = vf->vf_id;
> > + eth_dev->data->backer_port_id = pf->port_id;
> > eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
> > sizeof(struct rte_ether_addr) *
> > ENIC_UNICAST_PERFECT_FILTERS, 0);
> > diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> > index 0481b55381..d65b821a01 100644
> > --- a/drivers/net/i40e/i40e_vf_representor.c
> > +++ b/drivers/net/i40e/i40e_vf_representor.c
> > @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> > ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> > RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> > ethdev->data->representor_id = representor->vf_id;
> > + ethdev->data->backer_port_id = pf->dev_data->port_id;
> >
> > /* Setting the number queues allocated to the VF */
> > ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
> > diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
> > index 970461f3e9..e51d0aa6b9 100644
> > --- a/drivers/net/ice/ice_dcf_vf_representor.c
> > +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> > @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
> >
> > vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> > vf_rep_eth_dev->data->representor_id = repr->vf_id;
> > + vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
> >
> > vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> > index d5b636a194..9fa75984fb 100644
> > --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> > +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> > @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> >
> > ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> > ethdev->data->representor_id = representor->vf_id;
> > + ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
> >
> > /* Set representor device ops */
> > ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
> > diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> > index 470b16cb9a..1cddaaba1a 100644
> > --- a/drivers/net/mlx5/linux/mlx5_os.c
> > +++ b/drivers/net/mlx5/linux/mlx5_os.c
> > @@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> > if (priv->representor) {
> > eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> > eth_dev->data->representor_id = priv->representor_id;
> > + MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> > + struct mlx5_priv *opriv =
> > + rte_eth_devices[port_id].data->dev_private;
> > + if (opriv &&
> > + opriv->master &&
> > + opriv->domain_id == priv->domain_id &&
> > + opriv->sh == priv->sh) {
> > + eth_dev->data->backer_port_id = port_id;
> > + break;
> > + }
> > + }
> > + if (port_id >= RTE_MAX_ETHPORTS)
> > + eth_dev->data->backer_port_id = eth_dev->data->port_id;
> > }
> > priv->mp_id.port_id = eth_dev->data->port_id;
> > strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
> > diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> > index 26fa927039..a9c244c7dc 100644
> > --- a/drivers/net/mlx5/windows/mlx5_os.c
> > +++ b/drivers/net/mlx5/windows/mlx5_os.c
> > @@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> > if (priv->representor) {
> > eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> > eth_dev->data->representor_id = priv->representor_id;
> > + MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> > + struct mlx5_priv *opriv =
> > + rte_eth_devices[port_id].data->dev_private;
> > + if (opriv &&
> > + opriv->master &&
> > + opriv->domain_id == priv->domain_id &&
> > + opriv->sh == priv->sh) {
> > + eth_dev->data->backer_port_id = port_id;
> > + break;
> > + }
> > + }
> > + if (port_id >= RTE_MAX_ETHPORTS)
> > + eth_dev->data->backer_port_id = eth_dev->data->port_id;
> > }
> > /*
> > * Store associated network device interface index. This index
> > diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> > index 40e474aa7e..b940e6cb38 100644
> > --- a/lib/ethdev/ethdev_driver.h
> > +++ b/lib/ethdev/ethdev_driver.h
> > @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> > * For backward compatibility, if no representor info, direct
> > * map legacy VF (no controller and pf).
> > *
> > - * @param ethdev
> > - * Handle of ethdev port.
> > + * @param port_id
> > + * Port ID of the backing device.
> > * @param type
> > * Representor type.
> > * @param controller
> > @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> > */
> > __rte_internal
> > int
> > -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > +rte_eth_representor_id_get(uint16_t port_id,
> > enum rte_eth_representor_type type,
> > int controller, int pf, int representor_port,
> > uint16_t *repr_id);
> > diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> > index 1fe5fa1f36..eda216ced5 100644
> > --- a/lib/ethdev/rte_class_eth.c
> > +++ b/lib/ethdev/rte_class_eth.c
> > @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
> > c = i / (np * nf);
> > p = (i / nf) % np;
> > f = i % nf;
> > - if (rte_eth_representor_id_get(edev,
> > + if (rte_eth_representor_id_get(edev->data->backer_port_id,
> > eth_da.type,
> > eth_da.nb_mh_controllers == 0 ? -1 :
> > eth_da.mh_controllers[c],
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index daf5ca9242..7c9b0d6b3b 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
> > eth_dev = eth_dev_get(port_id);
> > strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
> > eth_dev->data->port_id = port_id;
> > + eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
> > eth_dev->data->mtu = RTE_ETHER_MTU;
> > pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
> >
> > @@ -5996,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
> > }
> >
> > int
> > -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > +rte_eth_representor_id_get(uint16_t port_id,
> > enum rte_eth_representor_type type,
> > int controller, int pf, int representor_port,
> > uint16_t *repr_id)
> > @@ -6012,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > return -EINVAL;
> >
> > /* Get PMD representor range info. */
> > - ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> > + ret = rte_eth_representor_info_get(port_id, NULL);
> > if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
> > controller == -1 && pf == -1) {
> > /* Direct mapping for legacy VF representor. */
> > @@ -6027,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > if (info == NULL)
> > return -ENOMEM;
> > info->nb_ranges_alloc = n;
> > - ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> > + ret = rte_eth_representor_info_get(port_id, info);
> > if (ret < 0)
> > goto out;
> >
> > @@ -6046,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > continue;
> > if (info->ranges[i].id_end < info->ranges[i].id_base) {
> > RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> > - ethdev->data->port_id, info->ranges[i].id_base,
> > + port_id, info->ranges[i].id_base,
> > info->ranges[i].id_end, i);
> > continue;
> >
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index edf96de2dc..48b814e8a1 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -185,6 +185,12 @@ struct rte_eth_dev_data {
> > /**< Switch-specific identifier.
> > * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> > */
> > + uint16_t backer_port_id;
> > + /**< Port ID of the backing device.
> > + * This device will be used to query representor
> > + * info and calculate representor IDs.
> > + * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> > + */
> >
> > pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> > uint64_t reserved_64s[4]; /**< Reserved for future fields */
> >
>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
2021-10-08 7:08 0% ` David Marchand
@ 2021-10-08 7:44 0% ` Liu, Changpeng
2021-10-11 6:58 0% ` Xia, Chenbo
0 siblings, 1 reply; 200+ results
From: Liu, Changpeng @ 2021-10-08 7:44 UTC (permalink / raw)
To: David Marchand, Harris, James R
Cc: Xia, Chenbo, dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar
Thanks, I have worked with Chenbo to address this issue before. After enabling the `ALLOW_INTERNAL_API` option, it now works with SPDK.
Another issue, raised by Jim Harris, is that for distro-packaged DPDK this option isn't enabled by default, so SPDK will not be able
to use a distro-packaged DPDK after this release.
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Friday, October 8, 2021 3:08 PM
> To: Liu, Changpeng <changpeng.liu@intel.com>
> Cc: Xia, Chenbo <chenbo.xia@intel.com>; Harris, James R
> <james.r.harris@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron Conole
> <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
>
> Hello,
>
> On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com> wrote:
> >
> > I tried the above DPDK patches, and got the following errors:
> >
> > pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error:
> Symbol is not public ABI
> > 115 | rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > pci.c: In function ‘cfg_write_rte’:
> > pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error:
> Symbol is not public ABI
> > 125 | rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > pci.c: In function ‘register_rte_driver’:
> > pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error:
> Symbol is not public ABI
> > 375 | rte_pci_register(&driver->driver);
>
> I should have got this warning... but compilation passed fine for me.
> Happy you tested it.
>
> >
> > We may use the new added API to replace rte_pci_write_config and
> rte_pci_read_config, but SPDK
> > do require rte_pci_register().
>
> Since SPDK has a PCI driver, you'll need to compile code that calls
> those PCI driver internal API with ALLOW_INTERNAL_API defined.
> You can probably add a #define ALLOW_INTERNAL_API first thing (it's
> important to have it defined before including any dpdk header) in
> pci.c
>
> Another option, is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
> $(DPDK_INC) -DALLOW_EXPERIMENTAL_API.
>
> Can someone from SPDK take over this and sync with Chenbo?
>
>
> Thanks.
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
2021-10-08 6:15 4% ` Liu, Changpeng
@ 2021-10-08 7:08 0% ` David Marchand
2021-10-08 7:44 0% ` Liu, Changpeng
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-08 7:08 UTC (permalink / raw)
To: Liu, Changpeng
Cc: Xia, Chenbo, Harris, James R, dev, ci, Aaron Conole, dpdklab,
Zawadzki, Tomasz, alexeymar
Hello,
On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com> wrote:
>
> I tried the above DPDK patches, and got the following errors:
>
> pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error: Symbol is not public ABI
> 115 | rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> pci.c: In function ‘cfg_write_rte’:
> pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error: Symbol is not public ABI
> 125 | rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> pci.c: In function ‘register_rte_driver’:
> pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error: Symbol is not public ABI
> 375 | rte_pci_register(&driver->driver);
I should have got this warning... but compilation passed fine for me.
Happy you tested it.
>
> We may use the new added API to replace rte_pci_write_config and rte_pci_read_config, but SPDK
> do require rte_pci_register().
Since SPDK has a PCI driver, you'll need to compile code that calls
those PCI driver internal API with ALLOW_INTERNAL_API defined.
You can probably add a #define ALLOW_INTERNAL_API first thing (it's
important to have it defined before including any dpdk header) in
pci.c
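Something like this at the top of pci.c (untested sketch):

    /* Must come before any DPDK header is included. */
    #define ALLOW_INTERNAL_API 1

    #include <rte_bus_pci.h> /* pci_driver.h once this series is applied */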
Another option is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
$(DPDK_INC) -DALLOW_EXPERIMENTAL_API.
Can someone from SPDK take over this and sync with Chenbo?
Thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
@ 2021-10-08 6:41 4% ` zhihongx.peng
2021-10-11 5:20 0% ` Peng, ZhihongX
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: zhihongx.peng @ 2021-10-08 6:41 UTC (permalink / raw)
To: olivier.matz, dmitry.kozliuk; +Cc: dev, Zhihong Peng, stable
From: Zhihong Peng <zhihongx.peng@intel.com>
The cl object is allocated in the cmdline_stdin_new function, so it is
logical to release it in the cmdline_stdin_exit function; this way cl
does not have to be released separately.
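For illustration, typical application usage after this change would be the
following sketch (ctx stands for the application's parse context):

    struct cmdline *cl = cmdline_stdin_new(ctx, "example> ");

    if (cl != NULL) {
            cmdline_interact(cl);
            cmdline_stdin_exit(cl); /* now also frees cl */
            /* no separate cmdline_free(cl) call anymore */
    }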
Fixes: af75078fece3 (first public release)
Cc: stable@dpdk.org
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/cmdline/cmdline_socket.c | 1 +
2 files changed, 6 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index efeffe37a0..be24925d16 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,11 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* cmdline: The cmdline_stdin_exit() function now also calls cmdline_free().
+ The cl object is allocated in cmdline_stdin_new(), so releasing it in
+ cmdline_stdin_exit() is logical. Application code that still calls
+ cmdline_free() after it needs to be deleted.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index 998e8ade25..ebd5343754 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
return;
terminal_restore(cl);
+ cmdline_free(cl);
}
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
@ 2021-10-08 6:15 4% ` Liu, Changpeng
2021-10-08 7:08 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Liu, Changpeng @ 2021-10-08 6:15 UTC (permalink / raw)
To: Xia, Chenbo, Harris, James R, David Marchand
Cc: dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar
I tried the above DPDK patches, and got the following errors:
pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error: Symbol is not public ABI
115 | rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci.c: In function ‘cfg_write_rte’:
pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error: Symbol is not public ABI
125 | rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci.c: In function ‘register_rte_driver’:
pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error: Symbol is not public ABI
375 | rte_pci_register(&driver->driver);
We may use the new added API to replace rte_pci_write_config and rte_pci_read_config, but SPDK
do require rte_pci_register().
> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Wednesday, October 6, 2021 12:26 PM
> To: Harris, James R <james.r.harris@intel.com>; David Marchand
> <david.marchand@redhat.com>; Liu, Changpeng <changpeng.liu@intel.com>
> Cc: dev@dpdk.org; ci@dpdk.org; Aaron Conole <aconole@redhat.com>; dpdklab
> <dpdklab@iol.unh.edu>; Zawadzki, Tomasz <tomasz.zawadzki@intel.com>;
> alexeymar@mellanox.com
> Subject: RE: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
>
> Thanks David for helping check this and including SPDK folks!
>
> Hi Changpeng,
>
> Although we have synced about this during last release's deprecation notice,
> I’d like to summarize two points for SPDK to change if this patchset applied.
>
> 1. The pci bus header for drivers will only be exposed if meson option
> 'enable_driver_sdk' is added, so SPDK need this DPDK meson option to build.
>
> 2. As some functions in pci bus is needed for apps and the rest for drivers,
> the header for driver is renamed to pci_driver.h (header for app is rte_bus_pci.h).
> So SPDK drivers will need pci_driver.h instead of rte_bus_pci.h starting from
> DPDK
> 21.11. David showed some tests he did below.
>
> Could you help check above two updates are fine to SPDK?
>
> Thanks,
> Chenbo
>
> > -----Original Message-----
> > From: Harris, James R <james.r.harris@intel.com>
> > Sent: Monday, October 4, 2021 11:56 PM
> > To: David Marchand <david.marchand@redhat.com>; Xia, Chenbo
> > <chenbo.xia@intel.com>; Liu, Changpeng <changpeng.liu@intel.com>
> > Cc: dev@dpdk.org; ci@dpdk.org; Aaron Conole <aconole@redhat.com>;
> dpdklab
> > <dpdklab@iol.unh.edu>; Zawadzki, Tomasz <tomasz.zawadzki@intel.com>;
> > alexeymar@mellanox.com
> > Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> >
> > Adding Changpeng Liu from SPDK side.
> >
> > On 10/4/21, 6:48 AM, "David Marchand" <david.marchand@redhat.com>
> wrote:
> >
> > On Thu, Sep 30, 2021 at 10:45 AM David Marchand
> > <david.marchand@redhat.com> wrote:
> > > On Wed, Sep 29, 2021 at 9:38 AM Xia, Chenbo <chenbo.xia@intel.com>
> > wrote:
> > > > @David, could you help me understand what is the compile error in
> > Fedora 31?
> > > > DPDK_compile_spdk failure is expected as the header name for SPDK
> > is changed,
> > > > I am not sure if it's the same error...
> > >
> > > The error log is odd (no compilation "backtrace").
> > > You'll need to test spdk manually I guess.
> >
> > Tried your series with SPDK (w/o and w/ enable_driver_sdk).
> > I think the same, and the error is likely due to the file rename.
> >
> > $ make
> > CC lib/env_dpdk/env.o
> > In file included from env.c:39:0:
> > env_internal.h:64:25: error: field ‘driver’ has incomplete type
> > struct rte_pci_driver driver;
> > ^
> > env_internal.h:75:59: warning: ‘struct rte_pci_device’ declared inside
> > parameter list [enabled by default]
> > int pci_device_init(struct rte_pci_driver *driver, struct
> > rte_pci_device *device);
> > ^
> > env_internal.h:75:59: warning: its scope is only this definition or
> > declaration, which is probably not what you want [enabled by default]
> > env_internal.h:76:28: warning: ‘struct rte_pci_device’ declared inside
> > parameter list [enabled by default]
> > int pci_device_fini(struct rte_pci_device *device);
> > ^
> > env_internal.h:89:38: warning: ‘struct rte_pci_device’ declared inside
> > parameter list [enabled by default]
> > void vtophys_pci_device_added(struct rte_pci_device *pci_device);
> > ^
> > env_internal.h:96:40: warning: ‘struct rte_pci_device’ declared inside
> > parameter list [enabled by default]
> > void vtophys_pci_device_removed(struct rte_pci_device *pci_device);
> > ^
> > make[2]: *** [env.o] Error 1
> > make[1]: *** [env_dpdk] Error 2
> > make: *** [lib] Error 2
> >
> >
> >
> > So basically, SPDK needs some updates since it has its own pci drivers.
> > I copied some SPDK folks for info.
> >
> > *Disclaimer* I only checked it links fine against my 21.11 dpdk env,
> > and did not test the other cases:
> >
> > diff --git a/dpdkbuild/Makefile b/dpdkbuild/Makefile
> > index d51b1a6e5..0e666735d 100644
> > --- a/dpdkbuild/Makefile
> > +++ b/dpdkbuild/Makefile
> > @@ -166,6 +166,7 @@ all: $(SPDK_ROOT_DIR)/dpdk/build-tmp
> > $(SPDK_ROOT_DIR)/dpdk/build-tmp: $(SPDK_ROOT_DIR)/mk/cc.mk
> > $(SPDK_ROOT_DIR)/include/spdk/config.h
> > $(Q)rm -rf $(SPDK_ROOT_DIR)/dpdk/build
> > $(SPDK_ROOT_DIR)/dpdk/build-tmp
> > $(Q)cd "$(SPDK_ROOT_DIR)/dpdk"; CC="$(SUB_CC)" meson
> > --prefix="$(MESON_PREFIX)" --libdir lib -Dc_args="$(DPDK_CFLAGS)"
> > -Dc_link_args="$(DPDK_LDFLAGS)" $(DPDK_OPTS)
> > -Ddisable_drivers="$(shell echo $(DPDK_DISABLED_DRVERS) | sed -E "s/
> > +/,/g")" build-tmp
> > + $(Q)! meson configure build-tmp | grep -qw enable_driver_sdk
> > || meson configure build-tmp -Denable_driver_sdk=true
> > $(Q)sed $(SED_INPLACE_FLAG) 's/#define RTE_EAL_PMD_PATH
> > .*/#define RTE_EAL_PMD_PATH ""/g'
> > $(SPDK_ROOT_DIR)/dpdk/build-tmp/rte_build_config.h
> > $(Q) \
> > # TODO Meson build adds libbsd dependency when it's available.
> > This means any app will be \
> > diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
> > index cc7db8aab..e24c6942f 100644bits with an embedded dpdk
> > --- a/lib/env_dpdk/env.mk
> > +++ b/lib/env_dpdk/env.mk
> > @@ -172,6 +172,12 @@ DPDK_PRIVATE_LINKER_ARGS += -lnuma
> > endif
> > endif
> >
> > +ifneq (,$(wildcard $(DPDK_INC_DIR)/rte_build_config.h))
> > +ifneq (,$(shell grep -e "define RTE_HAS_LIBARCHIVE 1"
> > $(DPDK_INC_DIR)/rte_build_config.h))
> > +DPDK_PRIVATE_LINKER_ARGS += -larchive
> > +endif
> > +endif
> > +
> > ifeq ($(OS),Linux)
> > DPDK_PRIVATE_LINKER_ARGS += -ldl
> > endif
> > diff --git a/lib/env_dpdk/env_internal.h b/lib/env_dpdk/env_internal.h
> > index 2303f432c..24b377545 100644
> > --- a/lib/env_dpdk/env_internal.h
> > +++ b/lib/env_dpdk/env_internal.h
> > @@ -43,13 +43,18 @@
> > #include <rte_eal.h>
> > #include <rte_bus.h>
> > #include <rte_pci.h>
> > -#include <rte_bus_pci.h>
> > #include <rte_dev.h>
> >
> > #if RTE_VERSION < RTE_VERSION_NUM(19, 11, 0, 0)
> > #error RTE_VERSION is too old! Minimum 19.11 is required.
> > #endif
> >
> > +#if RTE_VERSION < RTE_VERSION_NUM(21, 11, 0, 0)
> > +#include <rte_bus_pci.h>
> > +#else
> > +#include <pci_driver.h>
> > +#endif
> > +
> > /* x86-64 and ARM userspace virtual addresses use only the low 48
> > bits [0..47],
> > * which is enough to cover 256 TB.
> > */
> >
> >
> >
> > --
> > David Marchand
> >
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-07 18:58 0% ` Chautru, Nicolas
@ 2021-10-08 4:34 0% ` Nipun Gupta
0 siblings, 0 replies; 200+ results
From: Nipun Gupta @ 2021-10-08 4:34 UTC (permalink / raw)
To: Chautru, Nicolas, Akhil Goyal, dev, trix
Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand
> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Friday, October 8, 2021 12:29 AM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
>
> Hi Nipun,
>
> > -----Original Message-----
> > From: Nipun Gupta <nipun.gupta@nxp.com>
> > Sent: Thursday, October 7, 2021 9:49 AM
> > To: Chautru, Nicolas <nicolas.chautru@intel.com>; Akhil Goyal
> > <gakhil@marvell.com>; dev@dpdk.org; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> >
> >
> > > -----Original Message-----
> > > From: Chautru, Nicolas <nicolas.chautru@intel.com>
> > > Sent: Thursday, October 7, 2021 9:12 PM
> > > To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> > > <nipun.gupta@nxp.com>; trix@redhat.com
> > > Cc: thomas@monjalon.net; Zhang, Mingshan
> > <mingshan.zhang@intel.com>;
> > > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > > Hi Akhil,
> > >
> > >
> > > > -----Original Message-----
> > > > From: Akhil Goyal <gakhil@marvell.com>
> > > > Sent: Thursday, October 7, 2021 6:14 AM
> > > > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > > > nipun.gupta@nxp.com; trix@redhat.com
> > > > Cc: thomas@monjalon.net; Zhang, Mingshan
> > <mingshan.zhang@intel.com>;
> > > > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > > > david.marchand@redhat.com
> > > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > endianness assumption
> > > >
> > > > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > > endianness assumption
> > > > >
> > > > Title is too long.
> > > > bbdev: add dev info for data endianness
> > >
> > > OK
> > >
> > > >
> > > > > Adding device information to capture explicitly the assumption of
> > > > > the input/output data byte endianness being processed.
> > > > >
> > > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > > ---
> > > > > doc/guides/rel_notes/release_21_11.rst | 1 +
> > > > > drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> > > > > drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > > > > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
> > > > > drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> > > > > lib/bbdev/rte_bbdev.h | 8 ++++++++
> > > > > 6 files changed, 13 insertions(+)
> > > > >
> > > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > > index a8900a3..f0b3006 100644
> > > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > > @@ -191,6 +191,7 @@ API Changes
> > > > >
> > > > > * bbdev: Added capability related to more comprehensive CRC
> > options.
> > > > >
> > > > > +* bbdev: Added device info related to data byte endianness
> > > > > +processing
> > > > > assumption.
> > > >
> > > > It is not clear from the description or the release notes, what the
> > > > application is supposed to do based on the new dev_info field set
> > > > and how the driver determine what value to set?
> > > > Isn't there a standard from the application stand point that the
> > > > input/output data Should be in BE or in LE like in case of IP packets which
> > are always in BE?
> > > > I mean why is it dependent on the PMD which is processing it?
> > > > Whatever application understands, PMD should comply with that and do
> > > > internal Swapping if it does not support it.
> > > > Am I missing something?
> > >
> > > This is really to allow Nipin to add his own NXP la12xx PMD, which
> > > appears to have different assumption on endianness.
> > > All existing processing is done in LE by default by the existing PMDs
> > > and the existing ecosystem.
> > > I cannot comment on why they would want to do that for the la12xx
> > > specifically, I could only speculate but here trying to help to find
> > > the best way for the new PMD to be supported.
> > > So here this suggested change is purely about exposing different
> > > assumption for the PMDs, so that this new PMD can still be supported
> > > under this API even though this is in effect incompatible with existing
> > ecosystem.
> > > In case the application has different assumption that what the PMD
> > > does, then byte swapping would have to be done in the application,
> > > more likely I assume that la12xx has its own ecosystem with different
> > > endianness required for other reasons.
> > > The option you are suggesting would be to put the burden on the PMD
> > > but I doubt there is an actual usecase for that. I assume they assume
> > > different endianness for other specific reason, not necessary to be
> > > compatible with existing ecosystem.
> > > Niping, Hemant, feel free to comment back, from previous discussion I
> > > believe this is what you wanted to do. Unsure of the reason, feel free
> > > to share more details or not.
> >
> > Akhil/Nicolas,
> >
> > As Hemant mentioned on v4 (previously asked by Dave)
> >
> > "---
> > If we go back to the data providing source i.e. FAPI interface, it is
> > implementation specific, as per SCF222.
> >
> > Our customers do use BE data in network and at FAPI interface.
> >
> > In LA12xx, at present, we use u8 Big-endian data for processing to FECA
> > engine. We do see that other drivers in DPDK are using Little Endian *(with
> > u32 data)* but standards is open for both.
> > "---
> >
> > Standard is not specific to endianness and is open for implementation.
> > So it does not makes a reason to have one endianness as default and other
> > managed in the PMD, and the current change seems right.
> >
> > Yes endianness assumption is taken in the test vector input/output data, but
> > this should be acceptable as it does not impact the PMD's and end user
> > applications in general.
>
> I want to clarify that this would impact the application in case the user
> wanted to switch between 2 such hw accelerators.
> I.e. you cannot switch between the 2 solutions, they are incompatible unless
> you explicitly do the byteswap in the application (as is done in bbdev-test).
> Not necessarily a problem in case they address 2 different ecosystems, but
> capturing the implication to be explicit. I.e. each device exposes the
> assumptions expected of the application, and it is up to the application
> using the bbdev API to satisfy the related assumptions.
Hi Nicolas,
Bbdev-test is just one test application and it relies on test vectors. Consider
the bbdev example application: the packets are in network order (i.e. big-endian)
and no swapping is done in that application. The assumption is that the packets
arriving from or going out to the network are already in the endianness
which the device processes.
Similarly, in real-world applications the endianness handling would be done
at the other end (the originator/consumer of the data), which again could be
done in hw accelerators without impacting those applications.
Also, as the standard is open to both, the current change makes sense and
swapping in any of the PMDs does not seem suitable.
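For an application that does want to drive both kinds of devices, the check
would be roughly as below (sketch only; the swap helper is hypothetical):

    struct rte_bbdev_info info;

    rte_bbdev_info_get(dev_id, &info);
    /* application data assumed big-endian here; swap only if the
     * device expects little-endian input */
    if (info.drv.data_endianness == RTE_BBDEV_LITTLE_ENDIAN)
            app_swap_op_data(op); /* hypothetical application helper */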
Regards,
Nipun
>
> >
> > BTW Nicolos, my name is Nipun :)
>
> My bad!
>
> I am marking this patch as obsolete since you have included it in your serie.
>
>
> >
> > >
> > >
> > > >
> > > > >
> > > > > ABI Changes
> > > > > -----------
> > > > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > index 4e2feef..eb2c6c1 100644
> > > > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > @@ -1089,6 +1089,7 @@
> > > > > #else
> > > > > dev_info->harq_buffer_size = 0;
> > > > > #endif
> > > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > > acc100_check_ir(d);
> > > > > }
> > > > >
> > > > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > index 6485cc8..c7f15c0 100644
> > > > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > @@ -372,6 +372,7 @@
> > > > > dev_info->default_queue_conf = default_queue_conf;
> > > > > dev_info->capabilities = bbdev_capabilities;
> > > > > dev_info->cpu_flag_reqs = NULL;
> > > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > > /* Calculates number of queues assigned to device */
> > > > > dev_info->max_num_queues = 0;
> > > > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > index 350c424..72e213e 100644
> > > > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > > > > dev_info->default_queue_conf = default_queue_conf;
> > > > > dev_info->capabilities = bbdev_capabilities;
> > > > > dev_info->cpu_flag_reqs = NULL;
> > > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > > /* Calculates number of queues assigned to device */
> > > > > dev_info->max_num_queues = 0;
> > > > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > index e1db2bf..0cab91a 100644
> > > > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > > > > dev_info->capabilities = bbdev_capabilities;
> > > > > dev_info->min_alignment = 64;
> > > > > dev_info->harq_buffer_size = 0;
> > > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > > rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > > > >dev_id);
> > > > > }
> > > > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > > > 3ebf62e..b3f3000 100644
> > > > > --- a/lib/bbdev/rte_bbdev.h
> > > > > +++ b/lib/bbdev/rte_bbdev.h
> > > > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > > > > RTE_BBDEV_INITIALIZED
> > > > > };
> > > > >
> > > > > +/** Definitions of device data byte endianness types */ enum
> > > > > +rte_bbdev_endianness {
> > > > > + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE
> */
> > > > > + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness
> LE */ };
> > > > If at all be need this dev_info field, as Tom suggested we should
> > > > use RTE_BIG/LITTLE_ENDIAN.
> > >
> > > See separate comment on my reply to Tom:
> > > I considered this but the usage is different, these are build time
> > > #define, and really would bring confusion here.
> > > Note that there are not really the endianness of the system itself but
> > > specific to the bbdev data output going through signal processing.
> > > I thought it was more explicit and less confusing this way, feel free
> > > to comment back.
> > > NXP would know best why a different endianness would be required in the
> > PMD.
> >
> > Please see previous comment for endianness support.
> > I agree with the RTE_ prefix we can add it as it is for the application interface.
> >
> > >
> > > >
> > > > > +
> > > > > /**
> > > > > * Get the total number of devices that have been successfully
> > initialised.
> > > > > *
> > > > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > > > > uint16_t min_alignment;
> > > > > /** HARQ memory available in kB */
> > > > > uint32_t harq_buffer_size;
> > > > > + /** Byte endianness assumption for input/output data */
> > > > > + enum rte_bbdev_endianness data_endianness;
> > > >
> > > > We should define how the input and output data are expected from the
> > app.
> > > > If need be, we can define a simple ``bool swap`` instead of an enum.
> > >
> > > This could be done as well. Default no swap, and swap required for the
> > > new PMD.
> > > I will let Nipin/Hemant comment back.
> >
> > Again endianness is implementation specific and not standard for 5G
> > processing, unlike it is for network packet.
> >
> > Regards,
> > Nipun
> >
> > >
> > > >
> > > > > /** Default queue configuration used if none is supplied */
> > > > > struct rte_bbdev_queue_conf default_queue_conf;
> > > > > /** Device operation capabilities */
> > > > > --
> > > > > 1.8.3.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-07 22:10 3% ` Dmitry Kozlyuk
1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
To: dev
Cc: Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson, David Marchand,
Olivier Matz, Ray Kinsella
Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:
* rdline_new(): allocate and initialize struct rdline.
This function replaces rdline_init() and takes an extra parameter:
opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.
Remove rdline_init() function from library headers and export list,
because using it requires the knowledge of sizeof(struct rdline).
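For illustration, a caller migrates roughly as in the sketch below (the
callback and context names are placeholders):

    struct rdline *rdl;

    rdl = rdline_new(my_write_char, my_validate, my_complete, my_ctx);
    if (rdl == NULL)
            return -1;
    rdline_newline(rdl, "example> ");
    /* callbacks can retrieve my_ctx via rdline_get_opaque(rdl) */

    /* ... and when done: */
    rdline_free(rdl);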
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 22 ++++---
doc/guides/rel_notes/release_21_11.rst | 3 +
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline_private.h | 49 +++++++++++++++
lib/cmdline/cmdline_rdline.c | 43 ++++++++++++-
lib/cmdline/cmdline_rdline.h | 86 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
8 files changed, 147 insertions(+), 69 deletions(-)
diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
struct rdline *rdl = cmdline_get_rdline(cl);
cmdline_printf(cl, "History buffer size: %zu\n",
- sizeof(rdl->history_buf));
+ rdline_get_history_buffer_size(rdl));
}
cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..054ebf5e9d 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,19 @@ test_cmdline_parse_fns(void)
static int
test_cmdline_rdline_fns(void)
{
- struct rdline rdl;
+ struct rdline *rdl;
rdline_write_char_t *wc = &cmdline_write_char;
rdline_validate_t *v = &valid_buffer;
rdline_complete_t *c = &complete_buffer;
- if (rdline_init(NULL, wc, v, c) >= 0)
+ rdl = rdline_new(NULL, v, c, NULL);
+ if (rdl != NULL)
goto error;
- if (rdline_init(&rdl, NULL, v, c) >= 0)
+ rdl = rdline_new(wc, NULL, c, NULL);
+ if (rdl != NULL)
goto error;
- if (rdline_init(&rdl, wc, NULL, c) >= 0)
- goto error;
- if (rdline_init(&rdl, wc, v, NULL) >= 0)
+ rdl = rdline_new(wc, v, NULL, NULL);
+ if (rdl != NULL)
goto error;
if (rdline_char_in(NULL, 0) >= 0)
goto error;
@@ -102,25 +103,30 @@ test_cmdline_rdline_fns(void)
goto error;
if (rdline_add_history(NULL, "history") >= 0)
goto error;
- if (rdline_add_history(&rdl, NULL) >= 0)
+ if (rdline_add_history(rdl, NULL) >= 0)
goto error;
if (rdline_get_history_item(NULL, 0) != NULL)
goto error;
/* void functions */
+ rdline_get_history_buffer_size(NULL);
+ rdline_get_opaque(NULL);
rdline_newline(NULL, "prompt");
- rdline_newline(&rdl, NULL);
+ rdline_newline(rdl, NULL);
rdline_stop(NULL);
rdline_quit(NULL);
rdline_restart(NULL);
rdline_redisplay(NULL);
rdline_reset(NULL);
rdline_clear_history(NULL);
+ rdline_free(NULL);
+ rdline_free(rdl);
return 0;
error:
printf("Error: function accepted null parameter!\n");
+ rdline_free(rdl);
return -1;
}
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+ to dynamically allocate and free it, and to access user data in callbacks.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
cl->ctx = ctx;
ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
- cmdline_complete_buffer);
+ cmdline_complete_buffer, cl);
if (ret != 0) {
free(cl);
return NULL;
}
- cl->rdl.opaque = cl;
cmdline_set_prompt(cl, prompt);
rdline_newline(&cl->rdl, cl->prompt);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
#include <cmdline.h>
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE 32
+#define RDLINE_VT100_BUF_SIZE 8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+ RDLINE_INIT,
+ RDLINE_RUNNING,
+ RDLINE_EXITED
+};
+
+struct rdline {
+ enum rdline_status status;
+ /* rdline bufs */
+ struct cirbuf left;
+ struct cirbuf right;
+ char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+ char right_buf[RDLINE_BUF_SIZE];
+
+ char prompt[RDLINE_PROMPT_SIZE];
+ unsigned int prompt_size;
+
+ char kill_buf[RDLINE_BUF_SIZE];
+ unsigned int kill_size;
+
+ /* history */
+ struct cirbuf history;
+ char history_buf[RDLINE_HISTORY_BUF_SIZE];
+ int history_cur_line;
+
+ /* callbacks and func pointers */
+ rdline_write_char_t *write_char;
+ rdline_validate_t *validate;
+ rdline_complete_t *complete;
+
+ /* vt100 parser */
+ struct cmdline_vt100 vt100;
+
+ /* opaque pointer */
+ void *opaque;
+};
+
#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal {
DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
__rte_format_printf(2, 0)
int cmdline_vdprintf(int fd, const char *format, va_list op);
+int rdline_init(struct rdline *rdl,
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+
#endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..d92b1cda53 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
#include <ctype.h>
#include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
#include "cmdline_rdline.h"
static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
int
rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete)
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
{
if (!rdl || !write_char || !validate || !complete)
return -EINVAL;
@@ -47,10 +49,33 @@ rdline_init(struct rdline *rdl,
rdl->validate = validate;
rdl->complete = complete;
rdl->write_char = write_char;
+ rdl->opaque = opaque;
rdl->status = RDLINE_INIT;
return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
}
+struct rdline *
+rdline_new(rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
+{
+ struct rdline *rdl;
+
+ rdl = malloc(sizeof(*rdl));
+ if (rdline_init(rdl, write_char, validate, complete, opaque) < 0) {
+ free(rdl);
+ rdl = NULL;
+ }
+ return rdl;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+ free(rdl);
+}
+
void
rdline_newline(struct rdline *rdl, const char *prompt)
{
@@ -564,6 +589,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
return NULL;
}
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+ return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+ return rdl != NULL ? rdl->opaque : NULL;
+}
+
int
rdline_add_history(struct rdline * rdl, const char * buf)
{
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..1b4cc7ce57 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
/**
* This file is a small equivalent to the GNU readline library, but it
* was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
*
* Obviously, it does not support as many things as the GNU readline,
* but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
*/
#include <stdio.h>
+#include <rte_compat.h>
#include <cmdline_cirbuf.h>
#include <cmdline_vt100.h>
@@ -38,19 +37,6 @@
extern "C" {
#endif
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE 32
-#define RDLINE_VT100_BUF_SIZE 8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
- RDLINE_INIT,
- RDLINE_RUNNING,
- RDLINE_EXITED
-};
-
struct rdline;
typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,32 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
char *dstbuf, unsigned int dstsize,
int *state);
-struct rdline {
- enum rdline_status status;
- /* rdline bufs */
- struct cirbuf left;
- struct cirbuf right;
- char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
- char right_buf[RDLINE_BUF_SIZE];
-
- char prompt[RDLINE_PROMPT_SIZE];
- unsigned int prompt_size;
-
- char kill_buf[RDLINE_BUF_SIZE];
- unsigned int kill_size;
-
- /* history */
- struct cirbuf history;
- char history_buf[RDLINE_HISTORY_BUF_SIZE];
- int history_cur_line;
-
- /* callbacks and func pointers */
- rdline_write_char_t *write_char;
- rdline_validate_t *validate;
- rdline_complete_t *complete;
-
- /* vt100 parser */
- struct cmdline_vt100 vt100;
-
- /* opaque pointer */
- void *opaque;
-};
-
/**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
* \param write_char The function used by the function to write a character
* \param validate A pointer to the function to execute when the
* user validates the buffer.
* \param complete A pointer to the function to execute when the
* user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return New rdline object on success, NULL on failure.
*/
-int rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete);
+__rte_experimental
+struct rdline *rdline_new(rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ * If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
/**
* Init the current buffer, and display a prompt.
@@ -194,6 +160,18 @@ void rdline_clear_history(struct rdline *rdl);
*/
char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..b9bbb87510 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
rdline_clear_history;
rdline_get_buffer;
rdline_get_history_item;
- rdline_init;
rdline_newline;
rdline_quit;
rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 20.11
cmdline_get_rdline;
+ # added in 21.11
+ rdline_new;
+ rdline_free;
+ rdline_get_history_buffer_size;
+ rdline_get_opaque;
+
local: *;
};
--
2.29.3
^ permalink raw reply [relevance 3%]
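A minimal usage sketch of the opaque rdline API introduced above, assuming only the
rdline_new()/rdline_free()/rdline_get_opaque() declarations shown in the patch and the
callback typedefs from cmdline_rdline.h; the callback bodies, the prompt string and the
use of stderr as the opaque pointer are illustrative placeholders, and the experimental
symbols need ALLOW_EXPERIMENTAL_API at build time:

#include <stdio.h>
#include <cmdline_rdline.h>

static int
my_write_char(struct rdline *rdl, char c)
{
	(void)rdl;
	return fputc(c, stdout); /* echo everything to stdout */
}

static void
my_validate(struct rdline *rdl, const char *buf, unsigned int size)
{
	/* The user pointer given to rdline_new() comes back via the accessor. */
	FILE *log = rdline_get_opaque(rdl);

	fprintf(log, "validated %u bytes: %s", size, buf);
}

static int
my_complete(struct rdline *rdl, const char *buf, char *dstbuf,
	    unsigned int dstsize, int *state)
{
	(void)rdl; (void)buf; (void)dstbuf; (void)dstsize; (void)state;
	return 0; /* no completion */
}

int
main(void)
{
	/* Previously: struct rdline rdl; rdline_init(&rdl, ...); */
	struct rdline *rdl = rdline_new(my_write_char, my_validate,
					my_complete, stderr);

	if (rdl == NULL)
		return 1;
	rdline_newline(rdl, "example> ");
	/* ... feed user input to the parser with rdline_char_in() ... */
	rdline_free(rdl);
	return 0;
}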
* [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
@ 2021-10-07 22:10 4% ` Dmitry Kozlyuk
2021-10-07 22:10 3% ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk, David Marchand, Olivier Matz, Ray Kinsella
Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/cmdline/cmdline.h | 19 -------------------
lib/cmdline/cmdline_private.h | 8 +++++++-
4 files changed, 9 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
- content. On Linux and FreeBSD, supported prior to DPDK 20.11,
- original structure will be kept until DPDK 21.11.
-
* security: The functions ``rte_security_set_pkt_metadata`` and
``rte_security_get_userdata`` will be made inline functions and additional
flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
#ifndef _CMDLINE_H_
#define _CMDLINE_H_
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
#include <rte_common.h>
#include <rte_compat.h>
@@ -27,23 +23,8 @@
extern "C" {
#endif
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
- int s_in;
- int s_out;
- cmdline_parse_ctx_t *ctx;
- struct rdline rdl;
- char prompt[RDLINE_PROMPT_SIZE];
- struct termios oldterm;
-};
-
-#else
-
struct cmdline;
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
#include <rte_os_shim.h>
#ifdef RTE_EXEC_ENV_WINDOWS
#include <rte_windows.h>
+#else
+#include <termios.h>
#endif
#include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
int is_console_input;
int is_console_output;
};
+#endif
struct cmdline {
int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
cmdline_parse_ctx_t *ctx;
struct rdline rdl;
char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal oldterm;
char repeated_char;
WORD repeat_count;
-};
+#else
+ struct termios oldterm;
#endif
+};
/* Disable buffering and echoing, save previous settings to oldterm. */
void terminal_adjust(struct cmdline *cl);
--
2.29.3
^ permalink raw reply [relevance 4%]
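A rough sketch of how an application keeps working after the structure becomes opaque,
loosely modeled on the examples/cmdline application; the empty parse context and the
prompt are placeholders, and cmdline_get_rdline() is the experimental accessor (added in
20.11) that stands in for direct access to the former cl->rdl field:

#include <rte_eal.h>
#include <cmdline.h>
#include <cmdline_rdline.h>
#include <cmdline_socket.h>

/* A NULL-terminated parse context with no commands, just for the sketch. */
static cmdline_parse_ctx_t ctx[] = { NULL };

int
main(int argc, char **argv)
{
	struct cmdline *cl;
	struct rdline *rdl;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	cl = cmdline_stdin_new(ctx, "example> ");
	if (cl == NULL)
		return -1;

	/* struct cmdline fields can no longer be dereferenced here;
	 * the embedded rdline is reached through the accessor instead. */
	rdl = cmdline_get_rdline(cl);
	(void)rdl;

	cmdline_interact(cl);
	cmdline_stdin_exit(cl);
	return 0;
}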
* [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05 20:15 3% ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
@ 2021-10-07 22:10 4% ` Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-07 22:10 3% ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2 siblings, 2 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk
Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.
v5: fix API documentation (Olivier),
remove useless NULL assignment (Stephen).
v4: rdline_create -> rdline_new, restore empty line (Olivier).
v3: add experimental tags and release notes for rdline.
v2: also hide struct rdline (David, Olivier).
Dmitry Kozlyuk (2):
cmdline: make struct cmdline opaque
cmdline: make struct rdline opaque
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 22 ++++---
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_21_11.rst | 5 ++
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline.h | 19 ------
lib/cmdline/cmdline_private.h | 57 ++++++++++++++++-
lib/cmdline/cmdline_rdline.c | 43 ++++++++++++-
lib/cmdline/cmdline_rdline.h | 86 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
10 files changed, 156 insertions(+), 93 deletions(-)
--
2.29.3
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 1/2] net: rename Ethernet header fields
@ 2021-10-07 22:07 1% ` Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:07 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk, Ferruh Yigit, Olivier Matz, Stephen Hemminger
The definition of the `rte_ether_hdr` structure used a workaround allowing DPDK
and Windows SDK headers to be used in the same file, because the Windows SDK
defines `s_addr` as a macro. Rename `s_addr` to `src_addr` and `d_addr`
to `dst_addr` to avoid the conflict and remove the workaround.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2021-July/215270.html
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
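For context, a purely illustrative sketch of what the rename means for application
code, mirroring the MAC-swap pattern touched throughout this patch (the function name
is a placeholder):

#include <rte_ether.h>

/* Swap source and destination MAC addresses in place. */
void
swap_mac_addrs(struct rte_ether_hdr *eth_hdr)
{
	struct rte_ether_addr tmp;

	/* Before this patch: eth_hdr->d_addr and eth_hdr->s_addr. */
	rte_ether_addr_copy(&eth_hdr->dst_addr, &tmp);
	rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
	rte_ether_addr_copy(&tmp, &eth_hdr->src_addr);
}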
app/test-pmd/5tswap.c | 6 +-
app/test-pmd/csumonly.c | 4 +-
app/test-pmd/flowgen.c | 4 +-
app/test-pmd/icmpecho.c | 16 ++---
app/test-pmd/ieee1588fwd.c | 6 +-
app/test-pmd/macfwd.c | 4 +-
app/test-pmd/macswap.h | 6 +-
app/test-pmd/txonly.c | 4 +-
app/test-pmd/util.c | 4 +-
app/test/packet_burst_generator.c | 4 +-
app/test/test_bpf.c | 4 +-
app/test/test_link_bonding_mode4.c | 15 ++--
doc/guides/rel_notes/deprecation.rst | 9 ++-
doc/guides/rel_notes/release_21_11.rst | 3 +
drivers/net/avp/avp_ethdev.c | 6 +-
drivers/net/bnx2x/bnx2x.c | 16 ++---
drivers/net/bonding/rte_eth_bond_8023ad.c | 6 +-
drivers/net/bonding/rte_eth_bond_alb.c | 4 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 20 +++---
drivers/net/enic/enic_flow.c | 8 +--
drivers/net/mlx5/mlx5_txpp.c | 4 +-
examples/bond/main.c | 20 +++---
examples/ethtool/ethtool-app/main.c | 4 +-
examples/eventdev_pipeline/pipeline_common.h | 4 +-
examples/flow_filtering/main.c | 4 +-
examples/ioat/ioatfwd.c | 4 +-
examples/ip_fragmentation/main.c | 4 +-
examples/ip_reassembly/main.c | 4 +-
examples/ipsec-secgw/ipsec-secgw.c | 4 +-
examples/ipsec-secgw/ipsec_worker.c | 4 +-
examples/ipv4_multicast/main.c | 4 +-
examples/l2fwd-crypto/main.c | 4 +-
examples/l2fwd-event/l2fwd_common.h | 4 +-
examples/l2fwd-jobstats/main.c | 4 +-
examples/l2fwd-keepalive/main.c | 4 +-
examples/l2fwd/main.c | 4 +-
examples/l3fwd-acl/main.c | 19 ++---
examples/l3fwd-power/main.c | 8 +--
examples/l3fwd/l3fwd_em.h | 8 +--
examples/l3fwd/l3fwd_fib.c | 4 +-
examples/l3fwd/l3fwd_lpm.c | 4 +-
examples/l3fwd/l3fwd_lpm.h | 8 +--
examples/link_status_interrupt/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 72 +++++++++----------
examples/ptpclient/ptpclient.c | 16 ++---
examples/vhost/main.c | 11 +--
examples/vmdq/main.c | 4 +-
examples/vmdq_dcb/main.c | 4 +-
lib/ethdev/rte_flow.h | 4 +-
lib/gro/gro_tcp4.c | 4 +-
lib/gro/gro_udp4.c | 4 +-
lib/gro/gro_vxlan_tcp4.c | 8 +--
lib/gro/gro_vxlan_udp4.c | 8 +--
lib/net/rte_arp.c | 4 +-
lib/net/rte_ether.h | 22 +-----
lib/pipeline/rte_table_action.c | 40 +++++------
56 files changed, 241 insertions(+), 244 deletions(-)
diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index e8cef9623b..629d3e0d31 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -27,9 +27,9 @@ swap_mac(struct rte_ether_hdr *eth_hdr)
struct rte_ether_addr addr;
/* Swap dest and src mac addresses. */
- rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
- rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
- rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+ rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
}
static inline void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533..090797318a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -873,9 +873,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
- &eth_hdr->d_addr);
+ &eth_hdr->dst_addr);
rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
parse_ethernet(eth_hdr, &info);
l3_hdr = (char *)eth_hdr + info.l2_len;
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a64d..a96169e680 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -122,8 +122,8 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
/* Initialize Ethernet header. */
eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
- rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
- rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->src_addr);
eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
/* Initialize IP header. */
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 8948f28eb5..8f1d68a83a 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -319,8 +319,8 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
if (verbose_level > 0) {
printf("\nPort %d pkt-len=%u nb-segs=%u\n",
fs->rx_port, pkt->pkt_len, pkt->nb_segs);
- ether_addr_dump(" ETH: src=", &eth_h->s_addr);
- ether_addr_dump(" dst=", &eth_h->d_addr);
+ ether_addr_dump(" ETH: src=", &eth_h->src_addr);
+ ether_addr_dump(" dst=", &eth_h->dst_addr);
}
if (eth_type == RTE_ETHER_TYPE_VLAN) {
vlan_h = (struct rte_vlan_hdr *)
@@ -385,17 +385,17 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
*/
/* Use source MAC address as destination MAC address. */
- rte_ether_addr_copy(&eth_h->s_addr, &eth_h->d_addr);
+ rte_ether_addr_copy(&eth_h->src_addr, &eth_h->dst_addr);
/* Set source MAC address with MAC address of TX port */
rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
- &eth_h->s_addr);
+ &eth_h->src_addr);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
rte_ether_addr_copy(&arp_h->arp_data.arp_tha,
&eth_addr);
rte_ether_addr_copy(&arp_h->arp_data.arp_sha,
&arp_h->arp_data.arp_tha);
- rte_ether_addr_copy(&eth_h->s_addr,
+ rte_ether_addr_copy(&eth_h->src_addr,
&arp_h->arp_data.arp_sha);
/* Swap IP addresses in ARP payload */
@@ -453,9 +453,9 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
* ICMP checksum is computed by assuming it is valid in the
* echo request and not verified.
*/
- rte_ether_addr_copy(&eth_h->s_addr, &eth_addr);
- rte_ether_addr_copy(&eth_h->d_addr, &eth_h->s_addr);
- rte_ether_addr_copy(&eth_addr, &eth_h->d_addr);
+ rte_ether_addr_copy(&eth_h->src_addr, &eth_addr);
+ rte_ether_addr_copy(&eth_h->dst_addr, &eth_h->src_addr);
+ rte_ether_addr_copy(&eth_addr, &eth_h->dst_addr);
ip_addr = ip_h->src_addr;
if (is_multicast_ipv4_addr(ip_h->dst_addr)) {
uint32_t ip_src;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..9cf10c1c50 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -178,9 +178,9 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
port_ieee1588_rx_timestamp_check(fs->rx_port, timesync_index);
/* Swap dest and src mac addresses. */
- rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
- rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
- rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+ rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
/* Forward PTP packet with hardware TX timestamp */
mb->ol_flags |= PKT_TX_IEEE1588_TMST;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d..ee76df7f03 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -85,9 +85,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
- &eth_hdr->d_addr);
+ &eth_hdr->dst_addr);
rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
mb->ol_flags |= ol_flags;
mb->l2_len = sizeof(struct rte_ether_hdr);
diff --git a/app/test-pmd/macswap.h b/app/test-pmd/macswap.h
index 0138441566..29c252bb8f 100644
--- a/app/test-pmd/macswap.h
+++ b/app/test-pmd/macswap.h
@@ -29,9 +29,9 @@ do_macswap(struct rte_mbuf *pkts[], uint16_t nb,
eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
/* Swap dest and src mac addresses. */
- rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
- rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
- rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+ rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
mbuf_field_set(mb, ol_flags);
}
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..40655801cc 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -362,8 +362,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
/*
* Initialize Ethernet header.
*/
- rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
- rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
+ rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.dst_addr);
+ rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.src_addr);
eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
if (rte_mempool_get_bulk(mbp, (void **)pkts_burst,
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..51506e4940 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -142,9 +142,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
" - no miss group");
MKDUMPSTR(print_buf, buf_size, cur_len, "\n");
}
- print_ether_addr(" src=", &eth_hdr->s_addr,
+ print_ether_addr(" src=", &eth_hdr->src_addr,
print_buf, buf_size, &cur_len);
- print_ether_addr(" - dst=", &eth_hdr->d_addr,
+ print_ether_addr(" - dst=", &eth_hdr->dst_addr,
print_buf, buf_size, &cur_len);
MKDUMPSTR(print_buf, buf_size, cur_len,
" - type=0x%04x - length=%u - nb_segs=%d",
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 0fd7290b0e..8ac24577ba 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -56,8 +56,8 @@ initialize_eth_header(struct rte_ether_hdr *eth_hdr,
struct rte_ether_addr *dst_mac, uint16_t ether_type,
uint8_t vlan_enabled, uint16_t van_id)
{
- rte_ether_addr_copy(dst_mac, &eth_hdr->d_addr);
- rte_ether_addr_copy(src_mac, &eth_hdr->s_addr);
+ rte_ether_addr_copy(dst_mac, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(src_mac, &eth_hdr->src_addr);
if (vlan_enabled) {
struct rte_vlan_hdr *vhdr = (struct rte_vlan_hdr *)(
diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index 527c06b807..8118a1849b 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -1008,9 +1008,9 @@ test_jump2_prepare(void *arg)
* Initialize ether header.
*/
rte_ether_addr_copy((struct rte_ether_addr *)dst_mac,
- &dn->eth_hdr.d_addr);
+ &dn->eth_hdr.dst_addr);
rte_ether_addr_copy((struct rte_ether_addr *)src_mac,
- &dn->eth_hdr.s_addr);
+ &dn->eth_hdr.src_addr);
dn->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
/*
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7ad..f120b2e3be 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -502,8 +502,8 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
slow_hdr = rte_pktmbuf_mtod(pkt, struct slow_protocol_frame *);
/* Change source address to partner address */
- rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.s_addr);
- slow_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
+ slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
slave->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
@@ -870,7 +870,7 @@ test_mode4_rx(void)
for (i = 0; i < expected_pkts_cnt; i++) {
hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
- cnt[rte_is_same_ether_addr(&hdr->d_addr,
+ cnt[rte_is_same_ether_addr(&hdr->dst_addr,
&bonded_mac)]++;
}
@@ -918,7 +918,7 @@ test_mode4_rx(void)
for (i = 0; i < expected_pkts_cnt; i++) {
hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
- eq_cnt += rte_is_same_ether_addr(&hdr->d_addr,
+ eq_cnt += rte_is_same_ether_addr(&hdr->dst_addr,
&bonded_mac);
}
@@ -1163,11 +1163,12 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
/* Copy multicast destination address */
rte_ether_addr_copy(&slow_protocol_mac_addr,
- &marker_hdr->eth_hdr.d_addr);
+ &marker_hdr->eth_hdr.dst_addr);
/* Init source address */
- rte_ether_addr_copy(&parnter_mac_default, &marker_hdr->eth_hdr.s_addr);
- marker_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ rte_ether_addr_copy(&parnter_mac_default,
+ &marker_hdr->eth_hdr.src_addr);
+ marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
slave->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..918ea3f403 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,8 +164,13 @@ Deprecation Notices
consistent with existing outer header checksum status flag naming, which
should help in reducing confusion about its usage.
-* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
- will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
+* i40e: As there are both i40evf and iavf pmd, the functions of them are
+ duplicated. And now more and more advanced features are developed on iavf.
+ To keep consistent with kernel driver's name
+ (https://patchwork.ozlabs.org/patch/970154/), i40evf is no need to maintain.
+ Starting from 21.05, the default VF driver of i40e will be iavf, but i40evf
+ can still be used if users specify the devarg "driver=i40evf". I40evf will
+ be deleted in DPDK 21.11.
* net: The structure ``rte_ipv4_hdr`` will have two unions.
The first union is for existing ``version_ihl`` byte
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index efeffe37a0..907e45c4e7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
+ to ``src_addr`` and ``dst_addr``, respectively.
+
ABI Changes
-----------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..b5fafd32b0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1205,17 +1205,17 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
{
struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+ if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->dst_addr) == 0)) {
/* allow all packets destined to our address */
return 0;
}
- if (likely(rte_is_broadcast_ether_addr(&eth->d_addr))) {
+ if (likely(rte_is_broadcast_ether_addr(&eth->dst_addr))) {
/* allow all broadcast packets */
return 0;
}
- if (likely(rte_is_multicast_ether_addr(&eth->d_addr))) {
+ if (likely(rte_is_multicast_ether_addr(&eth->dst_addr))) {
/* allow all multicast packets */
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9163b8b1fd..083deff1b1 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2233,8 +2233,8 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
tx_parse_bd =
&txq->tx_ring[TX_BD(bd_prod, txq)].parse_bd_e2;
- if (rte_is_multicast_ether_addr(&eh->d_addr)) {
- if (rte_is_broadcast_ether_addr(&eh->d_addr))
+ if (rte_is_multicast_ether_addr(&eh->dst_addr)) {
+ if (rte_is_broadcast_ether_addr(&eh->dst_addr))
mac_type = BROADCAST_ADDRESS;
else
mac_type = MULTICAST_ADDRESS;
@@ -2243,17 +2243,17 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
(mac_type << ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE_SHIFT);
rte_memcpy(&tx_parse_bd->data.mac_addr.dst_hi,
- &eh->d_addr.addr_bytes[0], 2);
+ &eh->dst_addr.addr_bytes[0], 2);
rte_memcpy(&tx_parse_bd->data.mac_addr.dst_mid,
- &eh->d_addr.addr_bytes[2], 2);
+ &eh->dst_addr.addr_bytes[2], 2);
rte_memcpy(&tx_parse_bd->data.mac_addr.dst_lo,
- &eh->d_addr.addr_bytes[4], 2);
+ &eh->dst_addr.addr_bytes[4], 2);
rte_memcpy(&tx_parse_bd->data.mac_addr.src_hi,
- &eh->s_addr.addr_bytes[0], 2);
+ &eh->src_addr.addr_bytes[0], 2);
rte_memcpy(&tx_parse_bd->data.mac_addr.src_mid,
- &eh->s_addr.addr_bytes[2], 2);
+ &eh->src_addr.addr_bytes[2], 2);
rte_memcpy(&tx_parse_bd->data.mac_addr.src_lo,
- &eh->s_addr.addr_bytes[4], 2);
+ &eh->src_addr.addr_bytes[4], 2);
tx_parse_bd->data.mac_addr.dst_hi =
rte_cpu_to_be_16(tx_parse_bd->data.mac_addr.dst_hi);
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 8b5b32fcaf..3558644232 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -587,8 +587,8 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
hdr = rte_pktmbuf_mtod(lacp_pkt, struct lacpdu_header *);
/* Source and destination MAC */
- rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.d_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.s_addr);
+ rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
+ rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -1346,7 +1346,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.s_addr);
+ rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 1d36a4a4a2..86335a7971 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -213,8 +213,8 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
- rte_ether_addr_copy(&client_info->app_mac, &eth_h->s_addr);
- rte_ether_addr_copy(&client_info->cli_mac, &eth_h->d_addr);
+ rte_ether_addr_copy(&client_info->app_mac, &eth_h->src_addr);
+ rte_ether_addr_copy(&client_info->cli_mac, &eth_h->dst_addr);
if (client_info->vlan_count > 0)
eth_h->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
else
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..6831fcb104 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -342,11 +342,11 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
bufs[j])) ||
!collecting ||
(!promisc &&
- ((rte_is_unicast_ether_addr(&hdr->d_addr) &&
+ ((rte_is_unicast_ether_addr(&hdr->dst_addr) &&
!rte_is_same_ether_addr(bond_mac,
- &hdr->d_addr)) ||
+ &hdr->dst_addr)) ||
(!allmulti &&
- rte_is_multicast_ether_addr(&hdr->d_addr)))))) {
+ rte_is_multicast_ether_addr(&hdr->dst_addr)))))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
@@ -477,9 +477,9 @@ update_client_stats(uint32_t addr, uint16_t port, uint32_t *TXorRXindicator)
"DstMAC:" RTE_ETHER_ADDR_PRT_FMT " DstIP:%s %s %d\n", \
info, \
port, \
- RTE_ETHER_ADDR_BYTES(&eth_h->s_addr), \
+ RTE_ETHER_ADDR_BYTES(&eth_h->src_addr), \
src_ip, \
- RTE_ETHER_ADDR_BYTES(&eth_h->d_addr), \
+ RTE_ETHER_ADDR_BYTES(&eth_h->dst_addr), \
dst_ip, \
arp_op, ++burstnumber)
#endif
@@ -643,9 +643,9 @@ static inline uint16_t
ether_hash(struct rte_ether_hdr *eth_hdr)
{
unaligned_uint16_t *word_src_addr =
- (unaligned_uint16_t *)eth_hdr->s_addr.addr_bytes;
+ (unaligned_uint16_t *)eth_hdr->src_addr.addr_bytes;
unaligned_uint16_t *word_dst_addr =
- (unaligned_uint16_t *)eth_hdr->d_addr.addr_bytes;
+ (unaligned_uint16_t *)eth_hdr->dst_addr.addr_bytes;
return (word_src_addr[0] ^ word_dst_addr[0]) ^
(word_src_addr[1] ^ word_dst_addr[1]) ^
@@ -942,10 +942,10 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
- if (rte_is_same_ether_addr(&ether_hdr->s_addr,
+ if (rte_is_same_ether_addr(&ether_hdr->src_addr,
&primary_slave_addr))
rte_ether_addr_copy(&active_slave_addr,
- &ether_hdr->s_addr);
+ &ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
#endif
@@ -1017,7 +1017,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, &eth_h->s_addr);
+ rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
/* Add packet to slave tx buffer */
slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cdfdc904a6..33147169ba 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,14 +656,14 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
if (!mask)
mask = &rte_flow_item_eth_mask;
- memcpy(enic_spec.d_addr.addr_bytes, spec->dst.addr_bytes,
+ memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
RTE_ETHER_ADDR_LEN);
- memcpy(enic_spec.s_addr.addr_bytes, spec->src.addr_bytes,
+ memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
RTE_ETHER_ADDR_LEN);
- memcpy(enic_mask.d_addr.addr_bytes, mask->dst.addr_bytes,
+ memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
RTE_ETHER_ADDR_LEN);
- memcpy(enic_mask.s_addr.addr_bytes, mask->src.addr_bytes,
+ memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
RTE_ETHER_ADDR_LEN);
enic_spec.ether_type = spec->type;
enic_mask.ether_type = mask->type;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..2be7e71f89 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -333,8 +333,8 @@ mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
/* Build test packet L2 header (Ethernet). */
dst = (uint8_t *)&es->inline_data;
eth_hdr = (struct rte_ether_hdr *)dst;
- rte_eth_random_addr(&eth_hdr->d_addr.addr_bytes[0]);
- rte_eth_random_addr(&eth_hdr->s_addr.addr_bytes[0]);
+ rte_eth_random_addr(&eth_hdr->dst_addr.addr_bytes[0]);
+ rte_eth_random_addr(&eth_hdr->src_addr.addr_bytes[0]);
eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
/* Build test packet L3 header (IP v4). */
dst += sizeof(struct rte_ether_hdr);
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f..7adaa93cad 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -358,7 +358,7 @@ struct global_flag_stru_t *global_flag_stru_p = &global_flag_stru;
static int lcore_main(__rte_unused void *arg1)
{
struct rte_mbuf *pkts[MAX_PKT_BURST] __rte_cache_aligned;
- struct rte_ether_addr d_addr;
+ struct rte_ether_addr dst_addr;
struct rte_ether_addr bond_mac_addr;
struct rte_ether_hdr *eth_hdr;
@@ -422,13 +422,13 @@ static int lcore_main(__rte_unused void *arg1)
if (arp_hdr->arp_opcode == rte_cpu_to_be_16(RTE_ARP_OP_REQUEST)) {
arp_hdr->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
/* Switch src and dst data and set bonding MAC */
- rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
- rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
rte_ether_addr_copy(&arp_hdr->arp_data.arp_sha,
&arp_hdr->arp_data.arp_tha);
arp_hdr->arp_data.arp_tip = arp_hdr->arp_data.arp_sip;
- rte_ether_addr_copy(&bond_mac_addr, &d_addr);
- rte_ether_addr_copy(&d_addr, &arp_hdr->arp_data.arp_sha);
+ rte_ether_addr_copy(&bond_mac_addr, &dst_addr);
+ rte_ether_addr_copy(&dst_addr, &arp_hdr->arp_data.arp_sha);
arp_hdr->arp_data.arp_sip = bond_ip;
rte_eth_tx_burst(BOND_PORT, 0, &pkts[i], 1);
is_free = 1;
@@ -443,8 +443,10 @@ static int lcore_main(__rte_unused void *arg1)
}
ipv4_hdr = (struct rte_ipv4_hdr *)((char *)(eth_hdr + 1) + offset);
if (ipv4_hdr->dst_addr == bond_ip) {
- rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
- rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_hdr->src_addr,
+ &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&bond_mac_addr,
+ &eth_hdr->src_addr);
ipv4_hdr->dst_addr = ipv4_hdr->src_addr;
ipv4_hdr->src_addr = bond_ip;
rte_eth_tx_burst(BOND_PORT, 0, &pkts[i], 1);
@@ -519,8 +521,8 @@ static void cmd_obj_send_parsed(void *parsed_result,
created_pkt->pkt_len = pkt_size;
eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
- rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
- memset(&eth_hdr->d_addr, 0xFF, RTE_ETHER_ADDR_LEN);
+ rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
+ memset(&eth_hdr->dst_addr, 0xFF, RTE_ETHER_ADDR_LEN);
eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP);
arp_hdr = (struct rte_arp_hdr *)(
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6..1bc675962b 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -172,8 +172,8 @@ static void process_frame(struct app_port *ptr_port,
struct rte_ether_hdr *ptr_mac_hdr;
ptr_mac_hdr = rte_pktmbuf_mtod(ptr_frame, struct rte_ether_hdr *);
- rte_ether_addr_copy(&ptr_mac_hdr->s_addr, &ptr_mac_hdr->d_addr);
- rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->s_addr);
+ rte_ether_addr_copy(&ptr_mac_hdr->src_addr, &ptr_mac_hdr->dst_addr);
+ rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->src_addr);
}
static int worker_main(__rte_unused void *ptr_data)
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 6a4287602e..b12eb281e1 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -104,8 +104,8 @@ exchange_mac(struct rte_mbuf *m)
/* change mac addresses on packet (to use mbuf data) */
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- rte_ether_addr_copy(&eth->d_addr, &addr);
- rte_ether_addr_copy(&addr, &eth->d_addr);
+ rte_ether_addr_copy(&eth->dst_addr, &addr);
+ rte_ether_addr_copy(&addr, &eth->dst_addr);
}
static __rte_always_inline void
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55..dd8a33d036 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -75,9 +75,9 @@ main_loop(void)
eth_hdr = rte_pktmbuf_mtod(m,
struct rte_ether_hdr *);
print_ether_addr("src=",
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
print_ether_addr(" - dst=",
- &eth_hdr->d_addr);
+ &eth_hdr->dst_addr);
printf(" - queue=0x%x",
(unsigned int)i);
printf("\n");
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be5..ff36aa7f1e 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -322,11 +322,11 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid)
/* 02:00:00:00:00:xx - overwriting 2 bytes of source address but
* it's acceptable cause it gets overwritten by rte_ether_addr_copy
*/
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
/* src addr */
- rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->s_addr);
+ rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->src_addr);
}
/* Perform packet copy there is a user-defined function. 8< */
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f245369720..a7f40970f2 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -362,13 +362,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
m->l2_len = sizeof(struct rte_ether_hdr);
/* 02:00:00:00:00:xx */
- d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+ d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
*((uint64_t *)d_addr_bytes) = 0x000000000002 +
((uint64_t)port_out << 40);
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[port_out],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
eth_hdr->ether_type = ether_type;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790b..d611c7d016 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -413,11 +413,11 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
/* if packet wasn't IPv4 or IPv6, it's forwarded to the port it came from */
/* 02:00:00:00:00:xx */
- d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+ d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
*((uint64_t *)d_addr_bytes) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->src_addr);
send_single_packet(m, dst_port);
}
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..7b01872c6f 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -545,9 +545,9 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
}
- memcpy(&ethhdr->s_addr, &ethaddr_tbl[port].src,
+ memcpy(&ethhdr->src_addr, &ethaddr_tbl[port].src,
sizeof(struct rte_ether_addr));
- memcpy(&ethhdr->d_addr, &ethaddr_tbl[port].dst,
+ memcpy(&ethhdr->dst_addr, &ethaddr_tbl[port].dst,
sizeof(struct rte_ether_addr));
}
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..61cf9f57fb 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -49,8 +49,8 @@ update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid)
struct rte_ether_hdr *ethhdr;
ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
- memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
- memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
+ memcpy(&ethhdr->src_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
+ memcpy(&ethhdr->dst_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
}
static inline void
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b..d10de30ddb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -283,8 +283,8 @@ mcast_send_pkt(struct rte_mbuf *pkt, struct rte_ether_addr *dest_addr,
rte_pktmbuf_prepend(pkt, (uint16_t)sizeof(*ethdr));
RTE_ASSERT(ethdr != NULL);
- rte_ether_addr_copy(dest_addr, &ethdr->d_addr);
- rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->s_addr);
+ rte_ether_addr_copy(dest_addr, &ethdr->dst_addr);
+ rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->src_addr);
ethdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
/* Put new packet into the output queue */
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..c2ffbdd506 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -617,11 +617,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint16_t dest_portid)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
/* src addr */
- rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
}
static void
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 939221d45a..cecbd9b70e 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -92,11 +92,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
/* src addr */
- rte_ether_addr_copy(addr, &eth->s_addr);
+ rte_ether_addr_copy(addr, &eth->src_addr);
}
static __rte_always_inline struct l2fwd_resources *
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index afe7fe6ead..06280321b1 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -351,11 +351,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
buffer = tx_buffer[dst_port];
sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index d0d979f5ba..07271affb4 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -177,11 +177,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
buffer = tx_buffer[dst_port];
sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 05532551a5..f3deeba0a6 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -170,11 +170,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, unsigned dest_portid)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
/* src addr */
- rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
}
/* Simple forward. 8< */
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564..60545f3059 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1375,7 +1375,8 @@ send_single_packet(struct rte_mbuf *m, uint16_t port)
/* update src and dst mac*/
eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- memcpy(eh, &port_l2hdr[port], sizeof(eh->d_addr) + sizeof(eh->s_addr));
+ memcpy(eh, &port_l2hdr[port],
+ sizeof(eh->dst_addr) + sizeof(eh->src_addr));
qconf = &lcore_conf[lcore_id];
rte_eth_tx_buffer(port, qconf->tx_queue_id[port],
@@ -1743,8 +1744,9 @@ parse_eth_dest(const char *optarg)
return "port value exceeds RTE_MAX_ETHPORTS("
RTE_STR(RTE_MAX_ETHPORTS) ")";
- if (cmdline_parse_etheraddr(NULL, port_end, &port_l2hdr[portid].d_addr,
- sizeof(port_l2hdr[portid].d_addr)) < 0)
+ if (cmdline_parse_etheraddr(NULL, port_end,
+ &port_l2hdr[portid].dst_addr,
+ sizeof(port_l2hdr[portid].dst_addr)) < 0)
return "Invalid ethernet address";
return NULL;
}
@@ -2002,8 +2004,9 @@ set_default_dest_mac(void)
uint32_t i;
for (i = 0; i != RTE_DIM(port_l2hdr); i++) {
- port_l2hdr[i].d_addr.addr_bytes[0] = RTE_ETHER_LOCAL_ADMIN_ADDR;
- port_l2hdr[i].d_addr.addr_bytes[5] = i;
+ port_l2hdr[i].dst_addr.addr_bytes[0] =
+ RTE_ETHER_LOCAL_ADMIN_ADDR;
+ port_l2hdr[i].dst_addr.addr_bytes[5] = i;
}
}
@@ -2109,14 +2112,14 @@ main(int argc, char **argv)
"rte_eth_dev_adjust_nb_rx_tx_desc: err=%d, port=%d\n",
ret, portid);
- ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].s_addr);
+ ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].src_addr);
if (ret < 0)
rte_exit(EXIT_FAILURE,
"rte_eth_macaddr_get: err=%d, port=%d\n",
ret, portid);
- print_ethaddr("Dst MAC:", &port_l2hdr[portid].d_addr);
- print_ethaddr(", Src MAC:", &port_l2hdr[portid].s_addr);
+ print_ethaddr("Dst MAC:", &port_l2hdr[portid].dst_addr);
+ print_ethaddr(", Src MAC:", &port_l2hdr[portid].src_addr);
printf(", ");
/* init memory */
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44a..73a3ab5bc0 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -717,7 +717,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
dst_port = portid;
/* 02:00:00:00:00:xx */
- d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+ d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
*((uint64_t *)d_addr_bytes) =
0x000000000002 + ((uint64_t)dst_port << 40);
@@ -729,7 +729,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(m, dst_port);
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -749,13 +749,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
dst_port = portid;
/* 02:00:00:00:00:xx */
- d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+ d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
*((uint64_t *)d_addr_bytes) =
0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(m, dst_port);
#else
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index b992a21da4..e67f5f328c 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -36,11 +36,11 @@ l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
++(ipv4_hdr->hdr_checksum);
#endif
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
return dst_port;
}
@@ -64,11 +64,11 @@ l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
dst_port = portid;
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
return dst_port;
}
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index f8d6a3ac39..7fd7c1dd64 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -92,9 +92,9 @@ fib_send_single(int nb_tx, struct lcore_conf *qconf,
/* Set MAC addresses. */
eth_hdr = rte_pktmbuf_mtod(pkts_burst[j],
struct rte_ether_hdr *);
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[hops[j]];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[hops[j]];
rte_ether_addr_copy(&ports_eth_addr[hops[j]],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
/* Send single packet. */
send_single_packet(qconf, pkts_burst[j], hops[j]);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 7200160164..232b606b54 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -256,11 +256,11 @@ lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
}
#endif
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[mbuf->port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
#endif
return mbuf->port;
}
diff --git a/examples/l3fwd/l3fwd_lpm.h b/examples/l3fwd/l3fwd_lpm.h
index d730d72a20..c61b969584 100644
--- a/examples/l3fwd/l3fwd_lpm.h
+++ b/examples/l3fwd/l3fwd_lpm.h
@@ -40,11 +40,11 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
++(ipv4_hdr->hdr_checksum);
#endif
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(qconf, m, dst_port);
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -62,11 +62,11 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
dst_port = portid;
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(qconf, m, dst_port);
} else {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index a0bc1e56d0..e4542df11f 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -182,11 +182,11 @@ lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);
+ rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->src_addr);
buffer = tx_buffer[dst_port];
sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf26..2905199743 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -1068,24 +1068,24 @@ simple_ipv4_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
#endif
/* dst addr */
- *(uint64_t *)&eth_hdr[0]->d_addr = dest_eth_addr[dst_port[0]];
- *(uint64_t *)&eth_hdr[1]->d_addr = dest_eth_addr[dst_port[1]];
- *(uint64_t *)&eth_hdr[2]->d_addr = dest_eth_addr[dst_port[2]];
- *(uint64_t *)&eth_hdr[3]->d_addr = dest_eth_addr[dst_port[3]];
- *(uint64_t *)&eth_hdr[4]->d_addr = dest_eth_addr[dst_port[4]];
- *(uint64_t *)&eth_hdr[5]->d_addr = dest_eth_addr[dst_port[5]];
- *(uint64_t *)&eth_hdr[6]->d_addr = dest_eth_addr[dst_port[6]];
- *(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
+ *(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];
+ *(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];
+ *(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];
+ *(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];
+ *(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];
+ *(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];
+ *(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];
+ *(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];
/* src addr */
- rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
send_single_packet(m[0], (uint8_t)dst_port[0]);
send_single_packet(m[1], (uint8_t)dst_port[1]);
@@ -1203,24 +1203,24 @@ simple_ipv6_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
dst_port[7] = portid;
/* dst addr */
- *(uint64_t *)&eth_hdr[0]->d_addr = dest_eth_addr[dst_port[0]];
- *(uint64_t *)&eth_hdr[1]->d_addr = dest_eth_addr[dst_port[1]];
- *(uint64_t *)&eth_hdr[2]->d_addr = dest_eth_addr[dst_port[2]];
- *(uint64_t *)&eth_hdr[3]->d_addr = dest_eth_addr[dst_port[3]];
- *(uint64_t *)&eth_hdr[4]->d_addr = dest_eth_addr[dst_port[4]];
- *(uint64_t *)&eth_hdr[5]->d_addr = dest_eth_addr[dst_port[5]];
- *(uint64_t *)&eth_hdr[6]->d_addr = dest_eth_addr[dst_port[6]];
- *(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
+ *(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];
+ *(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];
+ *(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];
+ *(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];
+ *(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];
+ *(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];
+ *(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];
+ *(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];
/* src addr */
- rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
- rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+ rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
send_single_packet(m[0], dst_port[0]);
send_single_packet(m[1], dst_port[1]);
@@ -1268,11 +1268,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
++(ipv4_hdr->hdr_checksum);
#endif
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(m, dst_port);
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -1290,11 +1290,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
dst_port = portid;
/* dst addr */
- *(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+ *(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[dst_port],
- &eth_hdr->s_addr);
+ &eth_hdr->src_addr);
send_single_packet(m, dst_port);
} else
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fb..61e4ee0ea1 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -426,10 +426,10 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
created_pkt->data_len = pkt_size;
created_pkt->pkt_len = pkt_size;
eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
- rte_ether_addr_copy(&eth_addr, &eth_hdr->s_addr);
+ rte_ether_addr_copy(&eth_addr, &eth_hdr->src_addr);
/* Set multicast address 01-1B-19-00-00-00. */
- rte_ether_addr_copy(&eth_multicast, &eth_hdr->d_addr);
+ rte_ether_addr_copy(&eth_multicast, &eth_hdr->dst_addr);
eth_hdr->ether_type = htons(PTP_PROTOCOL);
ptp_msg = (struct ptp_message *)
@@ -449,14 +449,14 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
client_clkid =
&ptp_msg->delay_req.hdr.source_port_id.clock_id;
- client_clkid->id[0] = eth_hdr->s_addr.addr_bytes[0];
- client_clkid->id[1] = eth_hdr->s_addr.addr_bytes[1];
- client_clkid->id[2] = eth_hdr->s_addr.addr_bytes[2];
+ client_clkid->id[0] = eth_hdr->src_addr.addr_bytes[0];
+ client_clkid->id[1] = eth_hdr->src_addr.addr_bytes[1];
+ client_clkid->id[2] = eth_hdr->src_addr.addr_bytes[2];
client_clkid->id[3] = 0xFF;
client_clkid->id[4] = 0xFE;
- client_clkid->id[5] = eth_hdr->s_addr.addr_bytes[3];
- client_clkid->id[6] = eth_hdr->s_addr.addr_bytes[4];
- client_clkid->id[7] = eth_hdr->s_addr.addr_bytes[5];
+ client_clkid->id[5] = eth_hdr->src_addr.addr_bytes[3];
+ client_clkid->id[6] = eth_hdr->src_addr.addr_bytes[4];
+ client_clkid->id[7] = eth_hdr->src_addr.addr_bytes[5];
rte_memcpy(&ptp_data->client_clock_id,
client_clkid,
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e3..b24fd82a6e 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -757,7 +757,7 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
/* Learn MAC address of guest device from packet */
pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- if (find_vhost_dev(&pkt_hdr->s_addr)) {
+ if (find_vhost_dev(&pkt_hdr->src_addr)) {
RTE_LOG(ERR, VHOST_DATA,
"(%d) device is using a registered MAC!\n",
vdev->vid);
@@ -765,7 +765,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
}
for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
- vdev->mac_address.addr_bytes[i] = pkt_hdr->s_addr.addr_bytes[i];
+ vdev->mac_address.addr_bytes[i] =
+ pkt_hdr->src_addr.addr_bytes[i];
/* vlan_tag currently uses the device_id. */
vdev->vlan_tag = vlan_tags[vdev->vid];
@@ -945,7 +946,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
uint16_t lcore_id = rte_lcore_id();
pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+ dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
if (!dst_vdev)
return -1;
@@ -993,7 +994,7 @@ find_local_dest(struct vhost_dev *vdev, struct rte_mbuf *m,
struct rte_ether_hdr *pkt_hdr =
rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+ dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
if (!dst_vdev)
return 0;
@@ -1076,7 +1077,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
nh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- if (unlikely(rte_is_broadcast_ether_addr(&nh->d_addr))) {
+ if (unlikely(rte_is_broadcast_ether_addr(&nh->dst_addr))) {
struct vhost_dev *vdev2;
TAILQ_FOREACH(vdev2, &vhost_dev_list, global_vdev_entry) {
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 755dcafa2f..ee7f4324e1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -461,11 +461,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+ rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
}
/* When we receive a HUP signal, print out our stats */
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 6d3c918d6d..14c20e6a8b 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -512,11 +512,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
/* 02:00:00:00:00:xx */
- tmp = &eth->d_addr.addr_bytes[0];
+ tmp = &eth->dst_addr.addr_bytes[0];
*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
/* src addr */
- rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+ rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
}
/* When we receive a HUP signal, print out our stats */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..a89945061a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -785,8 +785,8 @@ struct rte_flow_item_eth {
/** Default mask for RTE_FLOW_ITEM_TYPE_ETH. */
#ifndef __cplusplus
static const struct rte_flow_item_eth rte_flow_item_eth_mask = {
- .hdr.d_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
- .hdr.s_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
.hdr.ether_type = RTE_BE16(0x0000),
};
#endif
diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index feb5855144..aff22178e3 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -243,8 +243,8 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
ip_id = is_atomic ? 0 : rte_be_to_cpu_16(ipv4_hdr->packet_id);
sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
- rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
- rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+ rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+ rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
key.ip_src_addr = ipv4_hdr->src_addr;
key.ip_dst_addr = ipv4_hdr->dst_addr;
key.src_port = tcp_hdr->src_port;
diff --git a/lib/gro/gro_udp4.c b/lib/gro/gro_udp4.c
index b8301296df..e78dda7874 100644
--- a/lib/gro/gro_udp4.c
+++ b/lib/gro/gro_udp4.c
@@ -238,8 +238,8 @@ gro_udp4_reassemble(struct rte_mbuf *pkt,
is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
- rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
- rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+ rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+ rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
key.ip_src_addr = ipv4_hdr->src_addr;
key.ip_dst_addr = ipv4_hdr->dst_addr;
key.ip_id = ip_id;
diff --git a/lib/gro/gro_vxlan_tcp4.c b/lib/gro/gro_vxlan_tcp4.c
index f3b6e603b9..2005899afe 100644
--- a/lib/gro/gro_vxlan_tcp4.c
+++ b/lib/gro/gro_vxlan_tcp4.c
@@ -358,8 +358,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
- rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
- rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+ rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+ rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
key.inner_key.recv_ack = tcp_hdr->recv_ack;
@@ -368,8 +368,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
- rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
- rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+ rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+ rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
key.outer_src_port = udp_hdr->src_port;
diff --git a/lib/gro/gro_vxlan_udp4.c b/lib/gro/gro_vxlan_udp4.c
index 37476361d5..4767c910bb 100644
--- a/lib/gro/gro_vxlan_udp4.c
+++ b/lib/gro/gro_vxlan_udp4.c
@@ -338,16 +338,16 @@ gro_vxlan_udp4_reassemble(struct rte_mbuf *pkt,
is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
- rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
- rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+ rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+ rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
key.inner_key.ip_id = ip_id;
key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
- rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
- rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+ rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+ rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
/* Note: It is unnecessary to save outer_src_port here because it can
diff --git a/lib/net/rte_arp.c b/lib/net/rte_arp.c
index 5c1e27b8c0..9f7eb6b375 100644
--- a/lib/net/rte_arp.c
+++ b/lib/net/rte_arp.c
@@ -29,8 +29,8 @@ rte_net_make_rarp_packet(struct rte_mempool *mpool,
}
/* Ethernet header. */
- memset(eth_hdr->d_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
- rte_ether_addr_copy(mac, &eth_hdr->s_addr);
+ memset(eth_hdr->dst_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
+ rte_ether_addr_copy(mac, &eth_hdr->src_addr);
eth_hdr->ether_type = RTE_BE16(RTE_ETHER_TYPE_RARP);
/* RARP header. */
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index 5f38b41dd4..b83e0d3fce 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -266,34 +266,16 @@ rte_ether_format_addr(char *buf, uint16_t size,
int
rte_ether_unformat_addr(const char *str, struct rte_ether_addr *eth_addr);
-/* Windows Sockets headers contain `#define s_addr S_un.S_addr`.
- * Temporarily disable this macro to avoid conflict at definition.
- * Place source MAC address in both `s_addr` and `S_un.S_addr` fields,
- * so that access works either directly or through the macro.
- */
-#pragma push_macro("s_addr")
-#ifdef s_addr
-#undef s_addr
-#endif
-
/**
* Ethernet header: Contains the destination address, source address
* and frame type.
*/
struct rte_ether_hdr {
- struct rte_ether_addr d_addr; /**< Destination address. */
- RTE_STD_C11
- union {
- struct rte_ether_addr s_addr; /**< Source address. */
- struct {
- struct rte_ether_addr S_addr;
- } S_un; /**< Do not use directly; use s_addr instead.*/
- };
+ struct rte_ether_addr dst_addr; /**< Destination address. */
+ struct rte_ether_addr src_addr; /**< Source address. */
rte_be16_t ether_type; /**< Frame type. */
} __rte_aligned(2);
-#pragma pop_macro("s_addr")
-
/**
* Ethernet VLAN Header.
* Contains the 16-bit VLAN Tag Control Identifier and the Ethernet type
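The struct change above is what applications see. A minimal before/after sketch of the application-side adjustment (the helper function below is hypothetical; only the rte_ether_hdr field names come from this patch):

#include <rte_byteorder.h>
#include <rte_ether.h>

/* Hypothetical helper, shown only to illustrate the field rename. */
static void fill_eth(struct rte_ether_hdr *eth,
		const struct rte_ether_addr *src,
		const struct rte_ether_addr *dst)
{
	/* old field names:
	 * rte_ether_addr_copy(dst, &eth->d_addr);
	 * rte_ether_addr_copy(src, &eth->s_addr);
	 * new field names (no Windows s_addr macro workaround needed): */
	rte_ether_addr_copy(dst, &eth->dst_addr);
	rte_ether_addr_copy(src, &eth->src_addr);
	eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
}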
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..4b0316bfed 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -615,8 +615,8 @@ encap_ether_apply(void *data,
RTE_ETHER_TYPE_IPV6;
/* Ethernet */
- rte_ether_addr_copy(&p->ether.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->ether.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(ethertype);
return 0;
@@ -633,8 +633,8 @@ encap_vlan_apply(void *data,
RTE_ETHER_TYPE_IPV6;
/* Ethernet */
- rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
/* VLAN */
@@ -657,8 +657,8 @@ encap_qinq_apply(void *data,
RTE_ETHER_TYPE_IPV6;
/* Ethernet */
- rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_QINQ);
/* SVLAN */
@@ -683,8 +683,8 @@ encap_qinq_pppoe_apply(void *data,
struct encap_qinq_pppoe_data *d = data;
/* Ethernet */
- rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
/* SVLAN */
@@ -719,8 +719,8 @@ encap_mpls_apply(void *data,
uint32_t i;
/* Ethernet */
- rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(ethertype);
/* MPLS */
@@ -746,8 +746,8 @@ encap_pppoe_apply(void *data,
struct encap_pppoe_data *d = data;
/* Ethernet */
- rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.d_addr);
- rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.s_addr);
+ rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.dst_addr);
+ rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_PPPOE_SESSION);
/* PPPoE and PPP*/
@@ -777,9 +777,9 @@ encap_vxlan_apply(void *data,
/* Ethernet */
rte_ether_addr_copy(&p->vxlan.ether.da,
- &d->ether.d_addr);
+ &d->ether.dst_addr);
rte_ether_addr_copy(&p->vxlan.ether.sa,
- &d->ether.s_addr);
+ &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
/* VLAN */
@@ -818,9 +818,9 @@ encap_vxlan_apply(void *data,
/* Ethernet */
rte_ether_addr_copy(&p->vxlan.ether.da,
- &d->ether.d_addr);
+ &d->ether.dst_addr);
rte_ether_addr_copy(&p->vxlan.ether.sa,
- &d->ether.s_addr);
+ &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV4);
/* IPv4*/
@@ -855,9 +855,9 @@ encap_vxlan_apply(void *data,
/* Ethernet */
rte_ether_addr_copy(&p->vxlan.ether.da,
- &d->ether.d_addr);
+ &d->ether.dst_addr);
rte_ether_addr_copy(&p->vxlan.ether.sa,
- &d->ether.s_addr);
+ &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
/* VLAN */
@@ -896,9 +896,9 @@ encap_vxlan_apply(void *data,
/* Ethernet */
rte_ether_addr_copy(&p->vxlan.ether.da,
- &d->ether.d_addr);
+ &d->ether.dst_addr);
rte_ether_addr_copy(&p->vxlan.ether.sa,
- &d->ether.s_addr);
+ &d->ether.src_addr);
d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV6);
/* IPv6*/
--
2.29.3
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-07 16:49 0% ` Nipun Gupta
@ 2021-10-07 18:58 0% ` Chautru, Nicolas
2021-10-08 4:34 0% ` Nipun Gupta
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-07 18:58 UTC (permalink / raw)
To: Nipun Gupta, Akhil Goyal, dev, trix
Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand
Hi Nipun,
> -----Original Message-----
> From: Nipun Gupta <nipun.gupta@nxp.com>
> Sent: Thursday, October 7, 2021 9:49 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; Akhil Goyal
> <gakhil@marvell.com>; dev@dpdk.org; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> endianness assumption
>
>
>
> > -----Original Message-----
> > From: Chautru, Nicolas <nicolas.chautru@intel.com>
> > Sent: Thursday, October 7, 2021 9:12 PM
> > To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> > <nipun.gupta@nxp.com>; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan
> <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> > Hi Akhil,
> >
> >
> > > -----Original Message-----
> > > From: Akhil Goyal <gakhil@marvell.com>
> > > Sent: Thursday, October 7, 2021 6:14 AM
> > > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > > nipun.gupta@nxp.com; trix@redhat.com
> > > Cc: thomas@monjalon.net; Zhang, Mingshan
> <mingshan.zhang@intel.com>;
> > > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > > david.marchand@redhat.com
> > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > endianness assumption
> > > >
> > > Title is too long.
> > > bbdev: add dev info for data endianness
> >
> > OK
> >
> > >
> > > > Adding device information to capture explicitly the assumption of
> > > > the input/output data byte endianness being processed.
> > > >
> > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > ---
> > > > doc/guides/rel_notes/release_21_11.rst | 1 +
> > > > drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> > > > drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > > > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
> > > > drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> > > > lib/bbdev/rte_bbdev.h | 8 ++++++++
> > > > 6 files changed, 13 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > index a8900a3..f0b3006 100644
> > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > @@ -191,6 +191,7 @@ API Changes
> > > >
> > > > * bbdev: Added capability related to more comprehensive CRC
> options.
> > > >
> > > > +* bbdev: Added device info related to data byte endianness
> > > > +processing
> > > > assumption.
> > >
> > > It is not clear from the description or the release notes, what the
> > > application is supposed to do based on the new dev_info field set
> > > and how the driver determine what value to set?
> > > Isn't there a standard from the application stand point that the
> > > input/output data Should be in BE or in LE like in case of IP packets which
> are always in BE?
> > > I mean why is it dependent on the PMD which is processing it?
> > > Whatever application understands, PMD should comply with that and do
> > > internal Swapping if it does not support it.
> > > Am I missing something?
> >
> > This is really to allow Nipin to add his own NXP la12xx PMD, which
> > appears to have different assumption on endianness.
> > All existing processing is done in LE by default by the existing PMDs
> > and the existing ecosystem.
> > I cannot comment on why they would want to do that for the la12xx
> > specifically, I could only speculate but here trying to help to find
> > the best way for the new PMD to be supported.
> > So here this suggested change is purely about exposing different
> > assumption for the PMDs, so that this new PMD can still be supported
> > under this API even though this is in effect incompatible with existing
> ecosystem.
> > In case the application has different assumption that what the PMD
> > does, then byte swapping would have to be done in the application,
> > more likely I assume that la12xx has its own ecosystem with different
> > endianness required for other reasons.
> > The option you are suggesting would be to put the burden on the PMD
> > but I doubt there is an actual usecase for that. I assume they assume
> > different endianness for other specific reason, not necessary to be
> > compatible with existing ecosystem.
> > Niping, Hemant, feel free to comment back, from previous discussion I
> > believe this is what you wanted to do. Unsure of the reason, feel free
> > to share more details or not.
>
> Akhil/Nicolas,
>
> As Hemant mentioned on v4 (previously asked by Dave)
>
> "---
> If we go back to the data providing source i.e. FAPI interface, it is
> implementation specific, as per SCF222.
>
> Our customers do use BE data in network and at FAPI interface.
>
> In LA12xx, at present, we use u8 Big-endian data for processing to FECA
> engine. We do see that other drivers in DPDK are using Little Endian *(with
> u32 data)* but standards is open for both.
> "---
>
> Standard is not specific to endianness and is open for implementation.
> So it does not makes a reason to have one endianness as default and other
> managed in the PMD, and the current change seems right.
>
> Yes endianness assumption is taken in the test vector input/output data, but
> this should be acceptable as it does not impact the PMD's and end user
> applications in general.
I want to clarify that this would impact the application in case the user wanted to switch between two such HW accelerators.
I.e. you cannot switch between the two solutions; they are incompatible unless you explicitly do the byte swap in the application (as is done in bbdev-test).
Not necessarily a problem in case they address two different ecosystems, but capturing the implication to be explicit: each device exposes the assumptions expected of the application, and it is up to the application using the bbdev API to satisfy the related assumptions.
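To make that concrete, a rough sketch of such application-side handling (the helper is hypothetical; only the data_endianness field and the RTE_BBDEV_*_ENDIAN values come from this patch):

#include <stddef.h>
#include <stdint.h>
#include <rte_bbdev.h>

/* Hypothetical helper: swap the data only when the device's declared
 * endianness differs from the one the application produces. */
static void
app_swap_if_needed(uint8_t *buf, size_t len,
		enum rte_bbdev_endianness app_endianness,
		const struct rte_bbdev_driver_info *info)
{
	size_t i;

	if (info->data_endianness == app_endianness)
		return; /* same assumption, no swap needed */

	/* Swap per 32-bit word here as an example; the swap unit is
	 * implementation specific and must match the device expectation. */
	for (i = 0; i + 4 <= len; i += 4) {
		uint8_t b0 = buf[i], b1 = buf[i + 1];

		buf[i] = buf[i + 3];
		buf[i + 1] = buf[i + 2];
		buf[i + 2] = b1;
		buf[i + 3] = b0;
	}
}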
>
> BTW Nicolos, my name is Nipun :)
My bad!
I am marking this patch as obsolete since you have included it in your series.
>
> >
> >
> > >
> > > >
> > > > ABI Changes
> > > > -----------
> > > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > index 4e2feef..eb2c6c1 100644
> > > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > @@ -1089,6 +1089,7 @@
> > > > #else
> > > > dev_info->harq_buffer_size = 0;
> > > > #endif
> > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > acc100_check_ir(d);
> > > > }
> > > >
> > > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > index 6485cc8..c7f15c0 100644
> > > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > @@ -372,6 +372,7 @@
> > > > dev_info->default_queue_conf = default_queue_conf;
> > > > dev_info->capabilities = bbdev_capabilities;
> > > > dev_info->cpu_flag_reqs = NULL;
> > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > > /* Calculates number of queues assigned to device */
> > > > dev_info->max_num_queues = 0;
> > > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > index 350c424..72e213e 100644
> > > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > > > dev_info->default_queue_conf = default_queue_conf;
> > > > dev_info->capabilities = bbdev_capabilities;
> > > > dev_info->cpu_flag_reqs = NULL;
> > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > > /* Calculates number of queues assigned to device */
> > > > dev_info->max_num_queues = 0;
> > > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > index e1db2bf..0cab91a 100644
> > > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > > > dev_info->capabilities = bbdev_capabilities;
> > > > dev_info->min_alignment = 64;
> > > > dev_info->harq_buffer_size = 0;
> > > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > > rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > > >dev_id);
> > > > }
> > > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > > 3ebf62e..b3f3000 100644
> > > > --- a/lib/bbdev/rte_bbdev.h
> > > > +++ b/lib/bbdev/rte_bbdev.h
> > > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > > > RTE_BBDEV_INITIALIZED
> > > > };
> > > >
> > > > +/** Definitions of device data byte endianness types */ enum
> > > > +rte_bbdev_endianness {
> > > > + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> > > > + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> > > If at all be need this dev_info field, as Tom suggested we should
> > > use RTE_BIG/LITTLE_ENDIAN.
> >
> > See separate comment on my reply to Tom:
> > I considered this but the usage is different, these are build time
> > #define, and really would bring confusion here.
> > Note that there are not really the endianness of the system itself but
> > specific to the bbdev data output going through signal processing.
> > I thought it was more explicit and less confusing this way, feel free
> > to comment back.
> > NXP would know best why a different endianness would be required in the
> PMD.
>
> Please see previous comment for endianness support.
> I agree with the RTE_ prefix we can add it as it is for the application interface.
>
> >
> > >
> > > > +
> > > > /**
> > > > * Get the total number of devices that have been successfully
> initialised.
> > > > *
> > > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > > > uint16_t min_alignment;
> > > > /** HARQ memory available in kB */
> > > > uint32_t harq_buffer_size;
> > > > + /** Byte endianness assumption for input/output data */
> > > > + enum rte_bbdev_endianness data_endianness;
> > >
> > > We should define how the input and output data are expected from the
> app.
> > > If need be, we can define a simple ``bool swap`` instead of an enum.
> >
> > This could be done as well. Default no swap, and swap required for the
> > new PMD.
> > I will let Nipin/Hemant comment back.
>
> Again endianness is implementation specific and not standard for 5G
> processing, unlike it is for network packet.
>
> Regards,
> Nipun
>
> >
> > >
> > > > /** Default queue configuration used if none is supplied */
> > > > struct rte_bbdev_queue_conf default_queue_conf;
> > > > /** Device operation capabilities */
> > > > --
> > > > 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-07 15:41 0% ` Chautru, Nicolas
@ 2021-10-07 16:49 0% ` Nipun Gupta
2021-10-07 18:58 0% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Nipun Gupta @ 2021-10-07 16:49 UTC (permalink / raw)
To: Chautru, Nicolas, Akhil Goyal, dev, trix
Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand
> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Thursday, October 7, 2021 9:12 PM
> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> <nipun.gupta@nxp.com>; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
>
> Hi Akhil,
>
>
> > -----Original Message-----
> > From: Akhil Goyal <gakhil@marvell.com>
> > Sent: Thursday, October 7, 2021 6:14 AM
> > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > nipun.gupta@nxp.com; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > Title is too long.
> > bbdev: add dev info for data endianness
>
> OK
>
> >
> > > Adding device information to capture explicitly the assumption of the
> > > input/output data byte endianness being processed.
> > >
> > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > ---
> > > doc/guides/rel_notes/release_21_11.rst | 1 +
> > > drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> > > drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
> > > drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> > > lib/bbdev/rte_bbdev.h | 8 ++++++++
> > > 6 files changed, 13 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > b/doc/guides/rel_notes/release_21_11.rst
> > > index a8900a3..f0b3006 100644
> > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > @@ -191,6 +191,7 @@ API Changes
> > >
> > > * bbdev: Added capability related to more comprehensive CRC options.
> > >
> > > +* bbdev: Added device info related to data byte endianness processing
> > > assumption.
> >
> > It is not clear from the description or the release notes, what the application
> > is supposed to do based on the new dev_info field set and how the driver
> > determine what value to set?
> > Isn't there a standard from the application stand point that the input/output
> > data Should be in BE or in LE like in case of IP packets which are always in BE?
> > I mean why is it dependent on the PMD which is processing it?
> > Whatever application understands, PMD should comply with that and do
> > internal Swapping if it does not support it.
> > Am I missing something?
>
> This is really to allow Nipin to add his own NXP la12xx PMD, which appears to
> have different assumption on endianness.
> All existing processing is done in LE by default by the existing PMDs and the
> existing ecosystem.
> I cannot comment on why they would want to do that for the la12xx specifically,
> I could only speculate but here trying to help to find the best way for the new
> PMD to be supported.
> So here this suggested change is purely about exposing different assumption for
> the PMDs, so that this new PMD can still be supported under this API even
> though this is in effect incompatible with existing ecosystem.
> In case the application has different assumption that what the PMD does, then
> byte swapping would have to be done in the application, more likely I assume
> that la12xx has its own ecosystem with different endianness required for other
> reasons.
> The option you are suggesting would be to put the burden on the PMD but I
> doubt there is an actual usecase for that. I assume they assume different
> endianness for other specific reason, not necessary to be compatible with
> existing ecosystem.
> Niping, Hemant, feel free to comment back, from previous discussion I believe
> this is what you wanted to do. Unsure of the reason, feel free to share more
> details or not.
Akhil/Nicolas,
As Hemant mentioned on v4 (previously asked by Dave)
"---
If we go back to the data providing source i.e. FAPI interface, it is
implementation specific, as per SCF222.
Our customers do use BE data in network and at FAPI interface.
In LA12xx, at present, we use u8 Big-endian data for processing to FECA
engine. We do see that other drivers in DPDK are using Little Endian
*(with u32 data)* but standards is open for both.
"---
The standard is not specific about endianness and is open for implementation.
So there is no reason to have one endianness as the default and the other
managed in the PMD, and the current change seems right.
Yes, an endianness assumption is made in the test vector input/output data,
but this should be acceptable as it does not impact the PMDs and
end-user applications in general.
BTW Nicolos, my name is Nipun :)
>
>
> >
> > >
> > > ABI Changes
> > > -----------
> > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > index 4e2feef..eb2c6c1 100644
> > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > @@ -1089,6 +1089,7 @@
> > > #else
> > > dev_info->harq_buffer_size = 0;
> > > #endif
> > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > acc100_check_ir(d);
> > > }
> > >
> > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > index 6485cc8..c7f15c0 100644
> > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > @@ -372,6 +372,7 @@
> > > dev_info->default_queue_conf = default_queue_conf;
> > > dev_info->capabilities = bbdev_capabilities;
> > > dev_info->cpu_flag_reqs = NULL;
> > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > > /* Calculates number of queues assigned to device */
> > > dev_info->max_num_queues = 0;
> > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > index 350c424..72e213e 100644
> > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > > dev_info->default_queue_conf = default_queue_conf;
> > > dev_info->capabilities = bbdev_capabilities;
> > > dev_info->cpu_flag_reqs = NULL;
> > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > > /* Calculates number of queues assigned to device */
> > > dev_info->max_num_queues = 0;
> > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > index e1db2bf..0cab91a 100644
> > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > > dev_info->capabilities = bbdev_capabilities;
> > > dev_info->min_alignment = 64;
> > > dev_info->harq_buffer_size = 0;
> > > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > > rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > >dev_id);
> > > }
> > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > 3ebf62e..b3f3000 100644
> > > --- a/lib/bbdev/rte_bbdev.h
> > > +++ b/lib/bbdev/rte_bbdev.h
> > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > > RTE_BBDEV_INITIALIZED
> > > };
> > >
> > > +/** Definitions of device data byte endianness types */ enum
> > > +rte_bbdev_endianness {
> > > + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> > > + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> > If at all be need this dev_info field,
> > as Tom suggested we should use RTE_BIG/LITTLE_ENDIAN.
>
> See separate comment on my reply to Tom:
> I considered this but the usage is different, these are build time #define, and
> really would bring confusion here.
> Note that there are not really the endianness of the system itself but specific to
> the bbdev data output going through signal processing.
> I thought it was more explicit and less confusing this way, feel free to comment
> back.
> NXP would know best why a different endianness would be required in the PMD.
Please see previous comment for endianness support.
I agree with the RTE_ prefix; we can add it, as it is for the application interface.
>
> >
> > > +
> > > /**
> > > * Get the total number of devices that have been successfully initialised.
> > > *
> > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > > uint16_t min_alignment;
> > > /** HARQ memory available in kB */
> > > uint32_t harq_buffer_size;
> > > + /** Byte endianness assumption for input/output data */
> > > + enum rte_bbdev_endianness data_endianness;
> >
> > We should define how the input and output data are expected from the app.
> > If need be, we can define a simple ``bool swap`` instead of an enum.
>
> This could be done as well. Default no swap, and swap required for the new
> PMD.
> I will let Nipin/Hemant comment back.
Again, endianness is implementation specific and not standardized for 5G processing,
unlike for network packets.
Regards,
Nipun
>
> >
> > > /** Default queue configuration used if none is supplied */
> > > struct rte_bbdev_queue_conf default_queue_conf;
> > > /** Device operation capabilities */
> > > --
> > > 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-07 13:13 0% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-07 15:41 0% ` Chautru, Nicolas
2021-10-07 16:49 0% ` Nipun Gupta
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-07 15:41 UTC (permalink / raw)
To: Akhil Goyal, dev, nipun.gupta, trix
Cc: thomas, Zhang, Mingshan, Joshi, Arun, hemant.agrawal, david.marchand
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Thursday, October 7, 2021 6:14 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> nipun.gupta@nxp.com; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> endianness assumption
>
> > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> Title is too long.
> bbdev: add dev info for data endianness
OK
>
> > Adding device information to capture explicitly the assumption of the
> > input/output data byte endianness being processed.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> > doc/guides/rel_notes/release_21_11.rst | 1 +
> > drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> > drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
> > drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> > lib/bbdev/rte_bbdev.h | 8 ++++++++
> > 6 files changed, 13 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index a8900a3..f0b3006 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -191,6 +191,7 @@ API Changes
> >
> > * bbdev: Added capability related to more comprehensive CRC options.
> >
> > +* bbdev: Added device info related to data byte endianness processing
> > assumption.
>
> It is not clear from the description or the release notes, what the application
> is supposed to do based on the new dev_info field set and how the driver
> determine what value to set?
> Isn't there a standard from the application stand point that the input/output
> data Should be in BE or in LE like in case of IP packets which are always in BE?
> I mean why is it dependent on the PMD which is processing it?
> Whatever application understands, PMD should comply with that and do
> internal Swapping if it does not support it.
> Am I missing something?
This is really to allow Nipin to add his own NXP la12xx PMD, which appears to have a different assumption on endianness.
All existing processing is done in LE by default by the existing PMDs and the existing ecosystem.
I cannot comment on why they would want to do that for the la12xx specifically; I could only speculate, but here I am trying to help find the best way for the new PMD to be supported.
So this suggested change is purely about exposing the differing assumptions of the PMDs, so that this new PMD can still be supported under this API even though it is in effect incompatible with the existing ecosystem.
In case the application has a different assumption than what the PMD does, then byte swapping would have to be done in the application; more likely I assume that la12xx has its own ecosystem with a different endianness required for other reasons.
The option you are suggesting would be to put the burden on the PMD, but I doubt there is an actual use case for that. I assume they chose a different endianness for another specific reason, not necessarily to be compatible with the existing ecosystem.
Niping, Hemant, feel free to comment back; from the previous discussion I believe this is what you wanted to do. I am unsure of the reason, so feel free to share more details or not.
>
> >
> > ABI Changes
> > -----------
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index 4e2feef..eb2c6c1 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1089,6 +1089,7 @@
> > #else
> > dev_info->harq_buffer_size = 0;
> > #endif
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > acc100_check_ir(d);
> > }
> >
> > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > index 6485cc8..c7f15c0 100644
> > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > @@ -372,6 +372,7 @@
> > dev_info->default_queue_conf = default_queue_conf;
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->cpu_flag_reqs = NULL;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > /* Calculates number of queues assigned to device */
> > dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > index 350c424..72e213e 100644
> > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > dev_info->default_queue_conf = default_queue_conf;
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->cpu_flag_reqs = NULL;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > /* Calculates number of queues assigned to device */
> > dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > index e1db2bf..0cab91a 100644
> > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->min_alignment = 64;
> > dev_info->harq_buffer_size = 0;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > >dev_id);
> > }
> > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > 3ebf62e..b3f3000 100644
> > --- a/lib/bbdev/rte_bbdev.h
> > +++ b/lib/bbdev/rte_bbdev.h
> > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > RTE_BBDEV_INITIALIZED
> > };
> >
> > +/** Definitions of device data byte endianness types */ enum
> > +rte_bbdev_endianness {
> > + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> > + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> If at all be need this dev_info field,
> as Tom suggested we should use RTE_BIG/LITTLE_ENDIAN.
See the separate comment in my reply to Tom:
I considered this, but the usage is different; those are build-time #defines and would really bring confusion here.
Note that this is not really the endianness of the system itself but is specific to the bbdev data output going through signal processing.
I thought it was more explicit and less confusing this way; feel free to comment back.
NXP would know best why a different endianness would be required in the PMD.
>
> > +
> > /**
> > * Get the total number of devices that have been successfully initialised.
> > *
> > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > uint16_t min_alignment;
> > /** HARQ memory available in kB */
> > uint32_t harq_buffer_size;
> > + /** Byte endianness assumption for input/output data */
> > + enum rte_bbdev_endianness data_endianness;
>
> We should define how the input and output data are expected from the app.
> If need be, we can define a simple ``bool swap`` instead of an enum.
This could be done as well: default to no swap, with a swap required for the new PMD.
I will let Nipin/Hemant comment back.
>
> > /** Default queue configuration used if none is supplied */
> > struct rte_bbdev_queue_conf default_queue_conf;
> > /** Device operation capabilities */
> > --
> > 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-07 12:01 0% ` Tom Rix
@ 2021-10-07 15:19 0% ` Chautru, Nicolas
0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2021-10-07 15:19 UTC (permalink / raw)
To: Tom Rix, dev, gakhil, nipun.gupta
Cc: thomas, Zhang, Mingshan, Joshi, Arun, hemant.agrawal, david.marchand
Hi Tom,
> -----Original Message-----
> From: Tom Rix <trix@redhat.com>
> Sent: Thursday, October 7, 2021 5:01 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> gakhil@marvell.com; nipun.gupta@nxp.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com
> Subject: Re: [PATCH v9] bbdev: add device info related to data endianness
> assumption
>
>
> On 10/6/21 1:58 PM, Nicolas Chautru wrote:
> > Adding device information to capture explicitly the assumption of the
> > input/output data byte endianness being processed.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> > doc/guides/rel_notes/release_21_11.rst | 1 +
> > drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> > drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
>
> Missed bbdev_null.c
>
> If this was intentional data_endianness is uninitialized or implicitly big endian.
>
> It would be better to say it is unknown. which may mean another enum is
> needed.
I considered this, but the null driver doesn't touch data, so it is not relevant there.
Still, if preferred, Nipin, feel free to set it in the null driver as well in your series (with a comment that it is not relevant).
>
> > drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> > lib/bbdev/rte_bbdev.h | 8 ++++++++
> > 6 files changed, 13 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index a8900a3..f0b3006 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -191,6 +191,7 @@ API Changes
> >
> > * bbdev: Added capability related to more comprehensive CRC options.
> >
> > +* bbdev: Added device info related to data byte endianness processing
> assumption.
> >
> > ABI Changes
> > -----------
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index 4e2feef..eb2c6c1 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1089,6 +1089,7 @@
> > #else
> > dev_info->harq_buffer_size = 0;
> > #endif
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > acc100_check_ir(d);
> > }
> >
> > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > index 6485cc8..c7f15c0 100644
> > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > @@ -372,6 +372,7 @@
> > dev_info->default_queue_conf = default_queue_conf;
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->cpu_flag_reqs = NULL;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > /* Calculates number of queues assigned to device */
> > dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > index 350c424..72e213e 100644
> > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > dev_info->default_queue_conf = default_queue_conf;
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->cpu_flag_reqs = NULL;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > /* Calculates number of queues assigned to device */
> > dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > index e1db2bf..0cab91a 100644
> > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > dev_info->capabilities = bbdev_capabilities;
> > dev_info->min_alignment = 64;
> > dev_info->harq_buffer_size = 0;
> > + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> > rte_bbdev_log_debug("got device info from %u\n", dev->data-
> >dev_id);
> > }
> > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > 3ebf62e..b3f3000 100644
> > --- a/lib/bbdev/rte_bbdev.h
> > +++ b/lib/bbdev/rte_bbdev.h
> > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > RTE_BBDEV_INITIALIZED
> > };
> >
> > +/** Definitions of device data byte endianness types */ enum
> > +rte_bbdev_endianness {
> > + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> > + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
>
> Could RTE_BIG|LITTLE_ENDIAN be reused ?
I considered this, but the usage is different; those are build-time #defines and would really bring confusion here.
Note that this is not really the endianness of the system itself but is specific to the bbdev data output going through signal processing.
I thought it was more explicit and less confusing this way; feel free to comment back.
Thanks for the comments.
>
> Tom
>
> > +
> > /**
> > * Get the total number of devices that have been successfully initialised.
> > *
> > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > uint16_t min_alignment;
> > /** HARQ memory available in kB */
> > uint32_t harq_buffer_size;
> > + /** Byte endianness assumption for input/output data */
> > + enum rte_bbdev_endianness data_endianness;
> > /** Default queue configuration used if none is supplied */
> > struct rte_bbdev_queue_conf default_queue_conf;
> > /** Device operation capabilities */
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-06 20:58 4% ` Nicolas Chautru
2021-10-07 12:01 0% ` Tom Rix
@ 2021-10-07 13:13 0% ` Akhil Goyal
2021-10-07 15:41 0% ` Chautru, Nicolas
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-07 13:13 UTC (permalink / raw)
To: Nicolas Chautru, dev, nipun.gupta, trix
Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal, david.marchand
> Subject: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
>
Title is too long.
bbdev: add dev info for data endianness
> Adding device information to capture explicitly the assumption
> of the input/output data byte endianness being processed.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 1 +
> drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
> drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> lib/bbdev/rte_bbdev.h | 8 ++++++++
> 6 files changed, 13 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index a8900a3..f0b3006 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,7 @@ API Changes
>
> * bbdev: Added capability related to more comprehensive CRC options.
>
> +* bbdev: Added device info related to data byte endianness processing
> assumption.
It is not clear from the description or the release notes what the application
is supposed to do based on the new dev_info field, and how the driver determines
what value to set.
Isn't there a standard from the application standpoint that the input/output data
should be in BE or in LE, as with IP packets, which are always in BE?
I mean, why is it dependent on the PMD which is processing it?
Whatever the application understands, the PMD should comply with that and do internal
swapping if it does not support it.
Am I missing something?
>
> ABI Changes
> -----------
> diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> b/drivers/baseband/acc100/rte_acc100_pmd.c
> index 4e2feef..eb2c6c1 100644
> --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> @@ -1089,6 +1089,7 @@
> #else
> dev_info->harq_buffer_size = 0;
> #endif
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> acc100_check_ir(d);
> }
>
> diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> index 6485cc8..c7f15c0 100644
> --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> @@ -372,6 +372,7 @@
> dev_info->default_queue_conf = default_queue_conf;
> dev_info->capabilities = bbdev_capabilities;
> dev_info->cpu_flag_reqs = NULL;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> /* Calculates number of queues assigned to device */
> dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> index 350c424..72e213e 100644
> --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> dev_info->default_queue_conf = default_queue_conf;
> dev_info->capabilities = bbdev_capabilities;
> dev_info->cpu_flag_reqs = NULL;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> /* Calculates number of queues assigned to device */
> dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> index e1db2bf..0cab91a 100644
> --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> dev_info->capabilities = bbdev_capabilities;
> dev_info->min_alignment = 64;
> dev_info->harq_buffer_size = 0;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> rte_bbdev_log_debug("got device info from %u\n", dev->data-
> >dev_id);
> }
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index 3ebf62e..b3f3000 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> RTE_BBDEV_INITIALIZED
> };
>
> +/** Definitions of device data byte endianness types */
> +enum rte_bbdev_endianness {
> + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
> +};
If at all we need this dev_info field,
as Tom suggested we should use RTE_BIG/LITTLE_ENDIAN.
> +
> /**
> * Get the total number of devices that have been successfully initialised.
> *
> @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> uint16_t min_alignment;
> /** HARQ memory available in kB */
> uint32_t harq_buffer_size;
> + /** Byte endianness assumption for input/output data */
> + enum rte_bbdev_endianness data_endianness;
We should define how the input and output data are expected from the app.
If need be, we can define a simple ``bool swap`` instead of an enum.
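For illustration, the simpler alternative could look like this (hypothetical sketch, not part of the patch; bbdev would document one reference endianness for input/output data and the driver would only report whether the application has to byte-swap):

#include <stdbool.h>

/* Hypothetical variant of the driver info, for discussion only. */
struct rte_bbdev_driver_info_example {
	/* ... existing fields ... */
	bool swap; /**< true if input/output data must be byte-swapped by the app */
};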
> /** Default queue configuration used if none is supplied */
> struct rte_bbdev_queue_conf default_queue_conf;
> /** Device operation capabilities */
> --
> 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
2021-10-06 20:58 4% ` Nicolas Chautru
@ 2021-10-07 12:01 0% ` Tom Rix
2021-10-07 15:19 0% ` Chautru, Nicolas
2021-10-07 13:13 0% ` [dpdk-dev] [EXT] " Akhil Goyal
1 sibling, 1 reply; 200+ results
From: Tom Rix @ 2021-10-07 12:01 UTC (permalink / raw)
To: Nicolas Chautru, dev, gakhil, nipun.gupta
Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal, david.marchand
On 10/6/21 1:58 PM, Nicolas Chautru wrote:
> Adding device information to capture explicitly the assumption
> of the input/output data byte endianness being processed.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 1 +
> drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
> drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
Missed bbdev_null.c.
If this was intentional, data_endianness is uninitialized or implicitly
big endian.
It would be better to say it is unknown, which may mean another enum value is
needed.
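Something along these lines (illustration only, not part of the patch):

enum rte_bbdev_endianness {
	RTE_BBDEV_UNKNOWN_ENDIAN, /**< device does not interpret data, e.g. bbdev_null */
	RTE_BBDEV_BIG_ENDIAN,     /**< data with byte-endianness BE */
	RTE_BBDEV_LITTLE_ENDIAN,  /**< data with byte-endianness LE */
};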
> drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
> lib/bbdev/rte_bbdev.h | 8 ++++++++
> 6 files changed, 13 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index a8900a3..f0b3006 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,7 @@ API Changes
>
> * bbdev: Added capability related to more comprehensive CRC options.
>
> +* bbdev: Added device info related to data byte endianness processing assumption.
>
> ABI Changes
> -----------
> diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
> index 4e2feef..eb2c6c1 100644
> --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> @@ -1089,6 +1089,7 @@
> #else
> dev_info->harq_buffer_size = 0;
> #endif
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> acc100_check_ir(d);
> }
>
> diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> index 6485cc8..c7f15c0 100644
> --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> @@ -372,6 +372,7 @@
> dev_info->default_queue_conf = default_queue_conf;
> dev_info->capabilities = bbdev_capabilities;
> dev_info->cpu_flag_reqs = NULL;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> /* Calculates number of queues assigned to device */
> dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> index 350c424..72e213e 100644
> --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> dev_info->default_queue_conf = default_queue_conf;
> dev_info->capabilities = bbdev_capabilities;
> dev_info->cpu_flag_reqs = NULL;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> /* Calculates number of queues assigned to device */
> dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> index e1db2bf..0cab91a 100644
> --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> dev_info->capabilities = bbdev_capabilities;
> dev_info->min_alignment = 64;
> dev_info->harq_buffer_size = 0;
> + dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>
> rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
> }
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index 3ebf62e..b3f3000 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> RTE_BBDEV_INITIALIZED
> };
>
> +/** Definitions of device data byte endianness types */
> +enum rte_bbdev_endianness {
> + RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
> + RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
> +};
Could RTE_BIG_ENDIAN/RTE_LITTLE_ENDIAN be reused?
Tom
> +
> /**
> * Get the total number of devices that have been successfully initialised.
> *
> @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> uint16_t min_alignment;
> /** HARQ memory available in kB */
> uint32_t harq_buffer_size;
> + /** Byte endianness assumption for input/output data */
> + enum rte_bbdev_endianness data_endianness;
> /** Default queue configuration used if none is supplied */
> struct rte_bbdev_queue_conf default_queue_conf;
> /** Device operation capabilities */
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
` (3 preceding siblings ...)
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-07 11:27 9% ` Konstantin Ananyev
2021-10-08 18:13 0% ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
2021-10-11 9:22 0% ` Andrew Rybchenko
6 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into a private header (ethdev_driver.h).
A few minor changes are included to keep DPDK building after that.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
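[Not part of the patch: a small sketch of what this means in practice.
Out-of-tree code that dereferenced rte_eth_devices[] directly stops
compiling; applications are expected to stick to the public accessors,
which are unchanged. The example_get_mtu() name is hypothetical.]

#include <rte_ethdev.h>

static uint16_t
example_get_mtu(uint16_t port_id)
{
	uint16_t mtu = 0;

	/* No longer possible from application code after this series:
	 *   struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	 *   mtu = dev->data->mtu;
	 */
	rte_eth_dev_get_mtu(port_id, &mtu);	/* supported public API */
	return mtu;
}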
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/netvsc/hn_var.h | 1 +
lib/ethdev/ethdev_driver.h | 148 ++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 143 -----------------
lib/ethdev/version.map | 2 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
13 files changed, 164 insertions(+), 152 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0874108b1d..4c9751bb1d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,12 @@ ABI Changes
to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
is used by public inline function ``rte_eth_rx_queue_count``.
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+ private data structures. ``rte_eth_devices[]`` can't be accessed directly
+ by user any more. While it is an ABI breakage, this change is intended
+ to be transparent for both users (no changes in user app is required) and
+ PMD developers (no changes in PMD is required).
+
Known Issues
------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
#include <rte_atomic.h>
#include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_spinlock.h>
#include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
#include <cryptodev_pmd.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_event_crypto_adapter.h>
#include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
#include <rte_mbuf.h>
#include <rte_io.h>
#include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "../cxgbe_compat.h"
#include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
#include <unistd.h>
#include <stdarg.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_eth_ctrl.h>
#include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
*/
#include <rte_eal_paging.h>
+#include <ethdev_driver.h>
/*
* Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..a743553d81 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,154 @@
#include <rte_ethdev.h>
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+ struct rte_eth_rxtx_callback *next;
+ union{
+ rte_rx_callback_fn rx;
+ rte_tx_callback_fn tx;
+ } fn;
+ void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+ eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+ eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /**< Pointer to PMD transmit prepare function. */
+ eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+
+ /**
+ * points to device data that is shared between
+ * primary and secondary processes.
+ */
+ struct rte_eth_dev_data *data;
+ void *process_private; /**< Pointer to per-process device data. */
+ const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+ struct rte_device *device; /**< Backing device */
+ struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+ /** User application callbacks for NIC interrupts */
+ struct rte_eth_dev_cb_list link_intr_cbs;
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ enum rte_eth_dev_state state; /**< Flag indicating the port state */
+ void *security_ctx; /**< Context for security ops */
+
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+ char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+ void **rx_queues; /**< Array of pointers to RX queues. */
+ void **tx_queues; /**< Array of pointers to TX queues. */
+ uint16_t nb_rx_queues; /**< Number of RX queues. */
+ uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+ struct rte_eth_dev_sriov sriov; /**< SRIOV data */
+
+ void *dev_private;
+ /**< PMD-specific private data.
+ * @see rte_eth_dev_release_port()
+ */
+
+ struct rte_eth_link dev_link; /**< Link-level information & status. */
+ struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
+ uint16_t mtu; /**< Maximum Transmission Unit. */
+ uint32_t min_rx_buf_size;
+ /**< Common RX buffer size handled by all queues. */
+
+ uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+ struct rte_ether_addr *mac_addrs;
+ /**< Device Ethernet link address.
+ * @see rte_eth_dev_release_port()
+ */
+ uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ /**< Bitmap associating MAC addresses to pools. */
+ struct rte_ether_addr *hash_mac_addrs;
+ /**< Device Ethernet MAC addresses of hash filtering.
+ * @see rte_eth_dev_release_port()
+ */
+ uint16_t port_id; /**< Device [external] port identifier. */
+
+ __extension__
+ uint8_t promiscuous : 1,
+ /**< RX promiscuous mode ON(1) / OFF(0). */
+ scattered_rx : 1,
+ /**< RX of scattered packets is ON(1) / OFF(0) */
+ all_multicast : 1,
+ /**< RX all multicast mode ON(1) / OFF(0). */
+ dev_started : 1,
+ /**< Device state: STARTED(1) / STOPPED(0). */
+ lro : 1,
+ /**< RX LRO is ON(1) / OFF(0) */
+ dev_configured : 1;
+ /**< Indicates whether the device is configured.
+ * CONFIGURED(1) / NOT CONFIGURED(0).
+ */
+ uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint32_t dev_flags; /**< Capabilities. */
+ int numa_node; /**< NUMA node connection. */
+ struct rte_vlan_filter_conf vlan_filter_conf;
+ /**< VLAN filter configuration. */
+ struct rte_eth_dev_owner owner; /**< The port owner. */
+ uint16_t representor_id;
+ /**< Switch-specific identifier.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
+
+ pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
/**< @internal Declaration of the hairpin peer queue information structure. */
struct rte_hairpin_peer_info;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d5853dff86..d0017bbe05 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -105,147 +105,4 @@ struct rte_eth_fp_ops {
extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
- struct rte_eth_rxtx_callback *next;
- union{
- rte_rx_callback_fn rx;
- rte_tx_callback_fn tx;
- } fn;
- void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
- eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
- eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
- eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
- eth_rx_queue_count_t rx_queue_count; /**< Get the number of used RX descriptors. */
- eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
- eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
- /**
- * Next two fields are per-device data but *data is shared between
- * primary and secondary processes and *process_private is per-process
- * private. The second one is managed by PMDs if necessary.
- */
- struct rte_eth_dev_data *data; /**< Pointer to device data. */
- void *process_private; /**< Pointer to per-process device data. */
- const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
- struct rte_device *device; /**< Backing device */
- struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
- /** User application callbacks for NIC interrupts */
- struct rte_eth_dev_cb_list link_intr_cbs;
- /**
- * User-supplied functions called from rx_burst to post-process
- * received packets before passing them to the user
- */
- struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- /**
- * User-supplied functions called from tx_burst to pre-process
- * received packets before passing them to the driver for transmission.
- */
- struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- enum rte_eth_dev_state state; /**< Flag indicating the port state */
- void *security_ctx; /**< Context for security ops */
-
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
- char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
- void **rx_queues; /**< Array of pointers to RX queues. */
- void **tx_queues; /**< Array of pointers to TX queues. */
- uint16_t nb_rx_queues; /**< Number of RX queues. */
- uint16_t nb_tx_queues; /**< Number of TX queues. */
-
- struct rte_eth_dev_sriov sriov; /**< SRIOV data */
-
- void *dev_private;
- /**< PMD-specific private data.
- * @see rte_eth_dev_release_port()
- */
-
- struct rte_eth_link dev_link; /**< Link-level information & status. */
- struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
- uint16_t mtu; /**< Maximum Transmission Unit. */
- uint32_t min_rx_buf_size;
- /**< Common RX buffer size handled by all queues. */
-
- uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
- struct rte_ether_addr *mac_addrs;
- /**< Device Ethernet link address.
- * @see rte_eth_dev_release_port()
- */
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
- /**< Bitmap associating MAC addresses to pools. */
- struct rte_ether_addr *hash_mac_addrs;
- /**< Device Ethernet MAC addresses of hash filtering.
- * @see rte_eth_dev_release_port()
- */
- uint16_t port_id; /**< Device [external] port identifier. */
-
- __extension__
- uint8_t promiscuous : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
- scattered_rx : 1, /**< RX of scattered packets is ON(1) / OFF(0) */
- all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
- dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
- lro : 1, /**< RX LRO is ON(1) / OFF(0) */
- dev_configured : 1;
- /**< Indicates whether the device is configured.
- * CONFIGURED(1) / NOT CONFIGURED(0).
- */
- uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint32_t dev_flags; /**< Capabilities. */
- int numa_node; /**< NUMA node connection. */
- struct rte_vlan_filter_conf vlan_filter_conf;
- /**< VLAN filter configuration. */
- struct rte_eth_dev_owner owner; /**< The port owner. */
- uint16_t representor_id;
- /**< Switch-specific identifier.
- * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
- */
-
- pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
#endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 2bad712958..cfe58e519d 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -73,7 +73,6 @@ DPDK_22 {
rte_eth_dev_udp_tunnel_port_add;
rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
- rte_eth_devices;
rte_eth_find_next;
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
@@ -270,6 +269,7 @@ INTERNAL {
rte_eth_dev_release_port;
rte_eth_dev_internal_reset;
rte_eth_devargs_parse;
+ rte_eth_devices;
rte_eth_dma_zone_free;
rte_eth_dma_zone_reserve;
rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
#include <rte_common.h>
#include <rte_dev.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
*/
#include <rte_spinlock.h>
#include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "eventdev_pmd.h"
#include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_string_fns.h>
#ifdef RTE_LIB_TELEMETRY
#include <telemetry_internal.h>
--
2.26.3
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
` (2 preceding siblings ...)
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-07 11:27 2% ` Konstantin Ananyev
2021-10-11 9:02 0% ` Andrew Rybchenko
2021-10-07 11:27 9% ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
` (2 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in the user app are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
involves an extra function call. That might cause some insignificant
slowdown on code paths where RX/TX callbacks are heavily involved.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
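[Not part of the patch: a minimal forwarding-loop sketch to illustrate the
"transparent for users" point - this application code compiles and behaves
the same before and after the change, since only the internals of the
inline wrappers differ.]

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

static void
fwd_loop(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, nb_tx;

	for (;;) {
		nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SZ);
		nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);
		/* free anything the TX queue did not accept */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}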
lib/ethdev/ethdev_private.c | 31 +++++
lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
lib/ethdev/version.map | 3 +
3 files changed, 208 insertions(+), 68 deletions(-)
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..1222c6f84e 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->txq.data = dev->data->tx_queues;
fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
}
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+ nb_pkts, cb->param);
+ cb = cb->next;
+ }
+
+ return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+ cb->param);
+ cb = cb->next;
+ }
+
+ return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index cdd16d6e57..c0e1a40681 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
#include <rte_ethdev_core.h>
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ * The address of an array of pointers to *rte_mbuf* structures that
+ * have been retrieved from the device.
> + * @param nb_rx
+ * The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ * The number of elements in *rx_pkts* array.
+ * @param opaque
+ * Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ * The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque);
+
/**
*
* Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
uint16_t nb_rx;
+ struct rte_eth_fp_ops *p;
+ void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_RX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_rx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
- nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
- rx_pkts, nb_pkts);
+
+ nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
/* __ATOMIC_RELEASE memory order was used when the
* call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
- __ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
- nb_pkts, cb->param);
- cb = cb->next;
- } while (cb != NULL);
- }
+ cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+ if (unlikely(cb != NULL))
+ nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
+ nb_rx, nb_pkts, cb);
#endif
rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
static inline int
rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
+
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- dev = &rte_eth_devices[port_id];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
- if (queue_id >= dev->data->nb_rx_queues ||
- dev->data->rx_queues[queue_id] == NULL)
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+ if (qd == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+ return (int)(*p->rx_queue_count)(qd);
}
/**@{@name Rx hardware descriptor states
@@ -5108,21 +5154,30 @@ static inline int
rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *rxq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_RX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_RX
- if (queue_id >= dev->data->nb_rx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
- rxq = dev->data->rx_queues[queue_id];
-
- return (*dev->rx_descriptor_status)(rxq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+ return (*p->rx_descriptor_status)(qd, offset);
}
/**@{@name Tx hardware descriptor states
@@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
uint16_t queue_id, uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *txq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
- txq = dev->data->tx_queues[queue_id];
-
- return (*dev->tx_descriptor_status)(txq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+ return (*p->tx_descriptor_status)(qd, offset);
}
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entry PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the transmit queue through which output packets must be
+ * sent.
+ * @param tx_pkts
+ * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ * which contain the output packets.
+ * @param nb_pkts
+ * The maximum number of packets to transmit.
+ * @return
+ * The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
/**
* Send a burst of output packets on a transmit queue of an Ethernet device.
*
@@ -5256,20 +5342,34 @@ static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_fp_ops *p;
+ void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
/* __ATOMIC_RELEASE memory order was used when the
* call back was inserted into the list.
@@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
- __ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
- cb->param);
- cb = cb->next;
- } while (cb != NULL);
- }
+ cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+ if (unlikely(cb != NULL))
+ nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
+ nb_pkts, cb);
#endif
- rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
- nb_pkts);
- return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+ nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+ rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+ return nb_pkts;
}
/**
@@ -5354,31 +5449,42 @@ static inline uint16_t
rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
rte_errno = ENODEV;
return 0;
}
#endif
- dev = &rte_eth_devices[port_id];
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ rte_errno = ENODEV;
+ return 0;
+ }
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
rte_errno = EINVAL;
return 0;
}
#endif
- if (!dev->tx_pkt_prepare)
+ if (!p->tx_pkt_prepare)
return nb_pkts;
- return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
- tx_pkts, nb_pkts);
+ return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
}
#else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..79e62dcf61 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_call_rx_callbacks;
+ rte_eth_call_tx_callbacks;
rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
rte_eth_find_next_sibling;
+ rte_eth_fp_ops;
rte_eth_iterator_cleanup;
rte_eth_iterator_init;
rte_eth_iterator_next;
--
2.26.3
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
2021-10-07 11:27 6% ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-07 11:27 2% ` Konstantin Ananyev
2021-10-09 12:05 0% ` fengchengwen
2021-10-11 8:25 0% ` Andrew Rybchenko
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
` (3 subsequent siblings)
6 siblings, 2 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the rte_eth_dev data public,
so we can still use inline functions for fast-path calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
The whole idea behind this new scheme:
1. PMDs keep setting up fast-path function pointers and related data
inside the rte_eth_dev struct in the same way they did it before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
(for secondary process) we call eth_dev_fp_ops_setup, which
copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
into some dummy values.
4. Fast-path ethdev API (rte_eth_rx_burst(), etc.) uses that new
flat array to call PMD-specific functions.
That approach should allow us to make rte_eth_devices[] private
without introducing regressions and helps to avoid changes in driver code.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
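[Not part of the patch: a simplified sketch of item 4 above - what the
fast-path inline wrappers boil down to once the flat array is in place
(debug checks and RX/TX callback handling omitted); see patch 5/7 for the
real implementation.]

#include <rte_ethdev.h>

static inline uint16_t
fp_rx_sketch(uint16_t port_id, uint16_t queue_id,
	     struct rte_mbuf **pkts, uint16_t n)
{
	struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];	/* opaque PMD queue pointer */

	return p->rx_pkt_burst(qd, pkts, n);	/* direct call into the PMD */
}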
lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.h | 7 +++++
lib/ethdev/rte_ethdev.c | 27 ++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
4 files changed, 141 insertions(+)
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
return str == NULL ? -1 : 0;
}
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+ __rte_unused struct rte_mbuf **rx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_eth_fp_ops dummy_ops = {
+ .rx_pkt_burst = dummy_eth_rx_burst,
+ .tx_pkt_burst = dummy_eth_tx_burst,
+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
+ .txq = {.data = dummy_data, .clbk = dummy_data,},
+ };
+
+ *fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev)
+{
+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+ fpo->rx_queue_count = dev->rx_queue_count;
+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+ fpo->rxq.data = dev->data->rx_queues;
+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+ fpo->txq.data = dev->data->tx_queues;
+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..5721be7bdc 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
/* Parse devargs value for representor parameter. */
int rte_eth_devargs_parse_representor_ports(char *str, void *data);
+/* reset eth fast-path API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth fast-path API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c8abda6dd7..9f7a0cbb8c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
+/* public fast-path API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/* spinlock for eth device callbacks */
static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
rte_eth_dev_callback_process(eth_dev,
RTE_ETH_EVENT_DESTROY, NULL);
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
(*dev->dev_ops->link_update)(dev, 0);
}
+ /* expose selection of PMD fast-path functions */
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
rte_ethdev_trace_start(port_id);
return 0;
}
@@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
return 0;
}
+ /* point fast-path functions to dummy ones */
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
dev->data->dev_started = 0;
ret = (*dev->dev_ops->dev_stop)(dev);
rte_ethdev_trace_stop(port_id, ret);
@@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
}
+RTE_INIT(eth_dev_init_fp_ops)
+{
+ uint32_t i;
+
+ for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
RTE_INIT(eth_dev_init_cb_lists)
{
uint16_t i;
@@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
if (dev == NULL)
return;
+ /*
+ * for secondary process, at that point we expect device
+ * to be already 'usable', so shared data and all function pointers
+ * for fast-path devops have to be setup properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 51cd68de94..d5853dff86 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
/**< @internal Check the status of a Tx descriptor */
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for fast-path ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+ void **data;
+ /**< points to array of internal queue data pointers */
+ void **clbk;
+ /**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * fast-path ethdev functions and related data are held in a flat array.
+ * One entry per ethdev.
+ * On 64-bit systems contents of this structure occupy exactly two 64B lines.
+ * On 32-bit systems contents of this structure fits into one 64B line.
+ */
+struct rte_eth_fp_ops {
+
+ /**
+ * Rx fast-path functions and related data.
+ * 64-bit systems: occupies first 64B line
+ */
+ eth_rx_burst_t rx_pkt_burst;
+ /**< PMD receive function. */
+ eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ struct rte_ethdev_qdata rxq;
+ /**< Rx queues data. */
+ uintptr_t reserved1[3];
+
+ /**
+ * Tx fast-path functions and related data.
+ * 64-bit systems: occupies second 64B line
+ */
+ eth_tx_burst_t tx_pkt_burst;
+ /**< PMD transmit function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /**< PMD transmit prepare function. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+ struct rte_ethdev_qdata txq;
+ /**< Tx queues data. */
+ uintptr_t reserved2[3];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/**
* @internal
--
2.26.3
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
@ 2021-10-07 11:27 6% ` Konstantin Ananyev
2021-10-11 8:06 0% ` Andrew Rybchenko
2021-10-12 17:59 0% ` Hyong Youb Kim (hyonkim)
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
` (4 subsequent siblings)
6 siblings, 2 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Currently the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter,
while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
index.
For the future work to hide rte_eth_devices[] and friends, it makes
sense to unify the parameter lists of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
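[Not part of the patch: a short usage sketch showing that the public
rte_eth_rx_queue_count() API is untouched - only the driver-level callback
signature changes from (dev, queue_id) to the internal queue pointer.]

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_rxq_fill(uint16_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);

	if (used < 0)
		printf("rxq count failed: %d\n", used);
	else
		printf("port %u rxq %u: %d used descriptors\n",
		       port_id, queue_id, used);
}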
doc/guides/rel_notes/release_21_11.rst | 6 ++++++
drivers/net/ark/ark_ethdev_rx.c | 4 ++--
drivers/net/ark/ark_ethdev_rx.h | 3 +--
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 8 +++++---
drivers/net/dpaa/dpaa_ethdev.c | 9 ++++-----
drivers/net/dpaa2/dpaa2_ethdev.c | 9 ++++-----
drivers/net/e1000/e1000_ethdev.h | 6 ++----
drivers/net/e1000/em_rxtx.c | 4 ++--
drivers/net/e1000/igb_rxtx.c | 4 ++--
drivers/net/enic/enic_ethdev.c | 12 ++++++------
drivers/net/fm10k/fm10k.h | 2 +-
drivers/net/fm10k/fm10k_rxtx.c | 4 ++--
drivers/net/hns3/hns3_rxtx.c | 7 +++++--
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_rxtx.c | 4 ++--
drivers/net/i40e/i40e_rxtx.h | 3 +--
drivers/net/iavf/iavf_rxtx.c | 4 ++--
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 ++--
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_txrx.c | 5 ++---
drivers/net/igc/igc_txrx.h | 3 +--
drivers/net/ixgbe/ixgbe_ethdev.h | 3 +--
drivers/net/ixgbe/ixgbe_rxtx.c | 4 ++--
drivers/net/mlx5/mlx5_rx.c | 26 ++++++++++++-------------
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 ++--
drivers/net/netvsc/hn_var.h | 2 +-
drivers/net/nfp/nfp_rxtx.c | 4 ++--
drivers/net/nfp/nfp_rxtx.h | 3 +--
drivers/net/octeontx2/otx2_ethdev.h | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 8 ++++----
drivers/net/sfc/sfc_ethdev.c | 12 ++++++------
drivers/net/thunderx/nicvf_ethdev.c | 3 +--
drivers/net/thunderx/nicvf_rxtx.c | 4 ++--
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +--
drivers/net/txgbe/txgbe_rxtx.c | 4 ++--
drivers/net/vhost/rte_eth_vhost.c | 4 ++--
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_core.h | 3 +--
43 files changed, 103 insertions(+), 110 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 436d29afda..ca5d169598 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,12 @@ ABI Changes
``rte_security_ipsec_xform`` to allow applications to configure SA soft
and hard expiry limits. Limits can be either in number of packets or bytes.
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
+ Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
+ to internal queue data as input parameter. While this change is transparent
+ to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+ is used by public inline function ``rte_eth_rx_queue_count``.
+
Known Issues
------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
}
uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
{
struct ark_rx_queue *queue;
- queue = dev->data->rx_queues[queue_id];
+ queue = rx_queue;
return (queue->prod_index - queue->cons_index); /* mod arith */
}
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Return Rx queue avail count */
uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
{
struct atl_rx_queue *rxq;
PMD_INIT_FUNC_TRACE();
- if (rx_queue_id >= dev->data->nb_rx_queues) {
- PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
- return 0;
- }
-
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
if (rxq == NULL)
return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..72c3d4f0fc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3150,20 +3150,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
}
static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
{
- struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+ struct bnxt *bp;
struct bnxt_cp_ring_info *cpr;
uint32_t desc = 0, raw_cons, cp_ring_size;
struct bnxt_rx_queue *rxq;
struct rx_pkt_cmpl *rxcmp;
int rc;
+ rxq = rx_queue;
+ bp = rxq->bp;
+
rc = is_bnxt_in_error(bp);
if (rc)
return rc;
- rxq = dev->data->rx_queues[rx_queue_id];
cpr = rxq->cp_ring;
raw_cons = cpr->cp_raw_cons;
cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
}
static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
- struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ struct qman_fq *rxq = rx_queue;
u32 frm_cnt = 0;
PMD_INIT_FUNC_TRACE();
if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
- DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frm_cnt);
+ DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frm_cnt);
}
return frm_cnt;
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 275656fbe4..43d46b595e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1013,10 +1013,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
}
static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
{
int32_t ret;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
struct dpaa2_queue *dpaa2_q;
struct qbman_swp *swp;
struct qbman_fq_query_np_rslt state;
@@ -1033,12 +1032,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
swp = DPAA2_PER_LCORE_PORTAL;
- dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+ dpaa2_q = rx_queue;
if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
frame_cnt = qbman_fq_state_frame_count(&state);
- DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frame_cnt);
+ DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frame_cnt);
}
return frame_cnt;
}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index c01e3ee9c5..fff52958df 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
@@ -474,8 +473,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 048b9148ed..13ea3a77f4 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
{
#define EM_RXQ_SCAN_INTERVAL 4
volatile struct e1000_rx_desc *rxdp;
struct em_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 46a7789d90..0ee1b8d48d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
{
#define IGB_RXQ_SCAN_INTERVAL 4
volatile union e1000_adv_rx_desc *rxdp;
struct igb_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
enic_free_rq(rxq);
}
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
{
- struct enic *enic = pmd_priv(dev);
+ struct enic *enic;
+ struct vnic_rq *sop_rq;
uint32_t queue_count = 0;
struct vnic_cq *cq;
uint32_t cq_tail;
uint16_t cq_idx;
- int rq_num;
- rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
- cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+ sop_rq = rx_queue;
+ enic = vnic_dev_priv(sop_rq->vdev);
+ cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
cq_idx = cq->to_clean;
cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 2e47ada829..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
int
fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index d9833505d1..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
{
#define FM10K_RXQ_SCAN_INTERVAL 4
volatile union fm10k_rx_desc *rxdp;
struct fm10k_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->hw_ring[rxq->next_dd];
while ((desc < rxq->nb_desc) &&
rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
}
uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
{
/*
* Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
*/
uint32_t driver_hold_bd_num;
struct hns3_rx_queue *rxq;
+ const struct rte_eth_dev *dev;
uint32_t fbd_num;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
+ dev = &rte_eth_devices[rxq->port_id];
+
fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
struct rte_mempool *mp);
int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 0fd9fef8e0..f5bebde302 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2107,14 +2107,14 @@ i40e_dev_rx_queue_release(void *rxq)
}
uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
{
#define I40E_RXQ_SCAN_INTERVAL 4
volatile union i40e_rx_desc *rxdp;
struct i40e_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 842924bce5..d495a741b6 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Get the number of used descriptors of a rx queue */
uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
{
#define IAVF_RXQ_SCAN_INTERVAL 4
volatile union iavf_rx_desc *rxdp;
struct iavf_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e69..c92fc5053b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1444,14 +1444,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
}
uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
{
#define ICE_RXQ_SCAN_INTERVAL 4
volatile union ice_rx_flex_desc *rxdp;
struct ice_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index eef76ffdc5..8d078e0edc 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -226,7 +226,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
struct ice_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 3979dca660..2498cfd290 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
igc_rx_queue_release(rxq);
}
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
{
/**
* Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
struct igc_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index d6f3799639..3b4c7450cd 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 37976902a1..6b7a4079db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0af9ce8aee..8e056db761 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
{
#define IXGBE_RXQ_SCAN_INTERVAL 4
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
/**
* DPDK callback to get the number of used descriptors in a RX queue.
*
- * @param dev
- * Pointer to the device structure.
- *
- * @param rx_queue_id
- * The Rx queue.
+ * @param rx_queue
+ * The Rx queue pointer.
*
* @return
* The number of used rx descriptor.
* -EINVAL if the queue is invalid
*/
uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq;
+ struct mlx5_rxq_data *rxq = rx_queue;
+ struct rte_eth_dev *dev;
+
+ if (!rxq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
dev->rx_pkt_burst == removed_rx_burst) {
rte_errno = ENOTSUP;
return -rte_errno;
}
- rxq = (*priv->rxqs)[rx_queue_id];
- if (!rxq) {
- rte_errno = EINVAL;
- return -rte_errno;
- }
+
return rx_queue_count(rxq);
}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
* For this device that means how many packets are pending in the ring.
*/
uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
{
- struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+ struct hn_rx_queue *rxq = rx_queue;
return rte_ring_count(rxq->rx_ring);
}
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
int hn_dev_rx_queue_status(void *rxq, uint16_t offset);
void hn_dev_free_queues(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
}
uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
{
struct nfp_net_rxq *rxq;
struct nfp_net_rx_desc *rxds;
uint32_t idx;
uint32_t count;
- rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
idx = rxq->rd_p;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
} __rte_aligned(64);
int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 90bafcea8e..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5cb3905b64..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
}
uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_rxq *rxq = rx_queue;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
uint32_t head, tail;
- nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
return (tail - head) % rxq->qlen;
}
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8debebc96e..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
* use any process-local pointers from the adapter data.
*/
static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
{
- const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
- struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
- sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+ struct sfc_dp_rxq *dp_rxq = rx_queue;
+ const struct sfc_dp_rx *dp_rx;
struct sfc_rxq_info *rxq_info;
- rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+ dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+ rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
return 0;
- return sap->dp_rx->qdesc_npending(rxq_info->dp);
+ return dp_rx->qdesc_npending(dp_rxq);
}
/*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
if (dev->rx_pkt_burst == NULL)
return;
- while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
- nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+ while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
NICVF_MAX_RX_FREE_THRESH);
PMD_DRV_LOG(INFO, "nb_pkts=%d rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
}
uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
{
struct nicvf_rxq *rxq;
- rxq = dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
*(uint64_t *)(&pkt->rearm_data) = init.value;
}
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
{
#define TXGBE_RXQ_SCAN_INTERVAL 4
volatile struct txgbe_rx_desc *rxdp;
struct txgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
}
static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
{
struct vhost_queue *vq;
- vq = dev->data->rx_queues[rx_queue_id];
+ vq = rx_queue;
if (vq == NULL)
return 0;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 0bff526819..cdd16d6e57 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
dev->data->rx_queues[queue_id] == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev, queue_id);
+ return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
}
/**@{@name Rx hardware descriptor states
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 2296872888..51cd68de94 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
/**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
/**< @internal Get number of used descriptors on a receive queue. */
typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
--
2.26.3
^ permalink raw reply [relevance 6%]
* [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
` (4 preceding siblings ...)
2021-10-06 16:42 0% ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-07 11:27 4% ` Konstantin Ananyev
` (6 more replies)
5 siblings, 7 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
v5 changes:
- Fix spelling (Thomas/David)
- Rename internal helper functions (David)
- Reorder patches and update commit messages (Thomas)
- Update comments (Thomas)
- Changed layout in rte_eth_fp_ops, to group functions and
related data based on their functionality:
first 64B line for Rx, second one for Tx.
Didn't observe any real performance difference compared to the
original layout, but decided to keep the new one, as it seems
a bit more sensible.
v4 changes:
- Fix secondary process attach (Pavan)
- Fix build failure (Ferruh)
- Update lib/ethdev/version.map (Ferruh)
Note that moving newly added symbols from EXPERIMENTAL to the DPDK_22
section makes checkpatch.sh complain.
v3 changes:
- Changes in public struct naming (Jerin/Haiyue)
- Split patches
- Update docs
- Shamelessly included Andrew's patch:
https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
into this series.
I have to do a similar thing here, so decided to avoid duplicated effort.
The aim of this patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.
The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
related data pointer from rte_eth_dev into a separate flat array.
We keep it public to still be able to use inline functions for these
'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
Note that apart from the function pointers themselves, each element of this
flat array also contains two opaque pointers for each ethdev:
1) a pointer to an array of internal queue data pointers
2) a pointer to an array of queue callback data pointers.
Note that exposing this extra information allows us to avoid extra
changes at the PMD level, plus should help to avoid possible
performance degradation (a rough sketch of this layout follows after this list).
2. Change implementation of 'fast' inline ethdev functions
(rte_eth_rx_burst(), etc.) to use new public flat array.
While it is an ABI breakage, this change is intended to be transparent
for both users (no changes in the user app are required) and PMD developers
(no changes in PMDs are required).
One extra note - with the new implementation, RX/TX callback invocation
will cost one extra function call. That might cause
some slowdown for code paths with RX/TX callbacks heavily involved.
Hope such a trade-off is acceptable to the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
things into internal header: <ethdev_driver.h>.
That approach was selected to:
- Avoid(/minimize) possible performance losses.
- Minimize required changes inside PMDs.
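To make points 1) and 2) more concrete, below is a rough sketch of the new
flat array and of how the inline rte_eth_rx_burst() is expected to use it.
Type and field names here are illustrative only - please refer to the actual
patches in this series for the exact definitions.
/* Sketch only: per-port entry of the public flat array. */
typedef uint16_t (*eth_rx_burst_t)(void *rxq, struct rte_mbuf **rx_pkts,
		uint16_t nb_pkts);
struct rte_ethdev_qdata {
	void **data;	/* array of internal queue data pointers */
	void **clbk;	/* array of queue callback data pointers */
};
struct rte_eth_fp_ops {
	/* first 64B cache line: Rx function pointers and Rx queue data */
	eth_rx_burst_t rx_pkt_burst;
	struct rte_ethdev_qdata rxq;
	/* ... remaining Rx fields and padding ... */
	/* second 64B cache line: Tx counterparts (tx_pkt_burst, txq, ...) */
} __rte_cache_aligned;
extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
/* The inline Rx burst then reduces to roughly: */
static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];
	return p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
}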
Performance testing results (ICX 2.0GHz, E810 (ice)):
- testpmd macswap fwd mode, plus
a) no RX/TX callbacks:
no actual slowdown observed
b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
~2% slowdown
- l3fwd: no actual slowdown observed
Would like to thank everyone who already reviewed and tested previous
versions of this series. All other interested parties, please don't be shy
and provide your feedback.
Andrew Rybchenko (1):
ethdev: remove legacy Rx descriptor done API
Konstantin Ananyev (6):
ethdev: allocate max space for internal queue array
ethdev: change input parameters for rx_queue_count
ethdev: copy fast-path API into separate structure
ethdev: make fast-path functions to use new flat array
ethdev: add API to retrieve multiple ethernet addresses
ethdev: hide eth dev related structures
app/test-pmd/config.c | 23 +-
doc/guides/nics/features.rst | 6 +-
doc/guides/rel_notes/deprecation.rst | 5 -
doc/guides/rel_notes/release_21_11.rst | 21 ++
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/ark/ark_ethdev_rx.c | 4 +-
drivers/net/ark/ark_ethdev_rx.h | 3 +-
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 +-
drivers/net/bnxt/bnxt_ethdev.c | 8 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa/dpaa_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/e1000/e1000_ethdev.h | 10 +-
drivers/net/e1000/em_ethdev.c | 1 -
drivers/net/e1000/em_rxtx.c | 21 +-
drivers/net/e1000/igb_ethdev.c | 2 -
drivers/net/e1000/igb_rxtx.c | 21 +-
drivers/net/enic/enic_ethdev.c | 12 +-
drivers/net/fm10k/fm10k.h | 5 +-
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/fm10k/fm10k_rxtx.c | 29 +-
drivers/net/hns3/hns3_rxtx.c | 7 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 30 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx.c | 4 +-
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 +-
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_ethdev.c | 1 -
drivers/net/igc/igc_txrx.c | 23 +-
drivers/net/igc/igc_txrx.h | 5 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 2 -
drivers/net/ixgbe/ixgbe_ethdev.h | 5 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
drivers/net/mlx5/mlx5_rx.c | 26 +-
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/netvsc/hn_var.h | 3 +-
drivers/net/nfp/nfp_rxtx.c | 4 +-
drivers/net/nfp/nfp_rxtx.h | 3 +-
drivers/net/octeontx2/otx2_ethdev.c | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 3 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 20 +-
drivers/net/sfc/sfc_ethdev.c | 29 +-
drivers/net/thunderx/nicvf_ethdev.c | 3 +-
drivers/net/thunderx/nicvf_rxtx.c | 4 +-
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +-
drivers/net/txgbe/txgbe_rxtx.c | 4 +-
drivers/net/vhost/rte_eth_vhost.c | 4 +-
drivers/net/virtio/virtio_ethdev.c | 1 -
lib/ethdev/ethdev_driver.h | 148 +++++++++
lib/ethdev/ethdev_private.c | 83 +++++
lib/ethdev/ethdev_private.h | 7 +
lib/ethdev/rte_ethdev.c | 89 ++++--
lib/ethdev/rte_ethdev.h | 288 ++++++++++++------
lib/ethdev/rte_ethdev_core.h | 171 +++--------
lib/ethdev/version.map | 8 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
67 files changed, 677 insertions(+), 564 deletions(-)
--
2.26.3
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v9 1/8] bbdev: add device info related to data endianness assumption
@ 2021-10-07 9:33 4% ` nipun.gupta
0 siblings, 0 replies; 200+ results
From: nipun.gupta @ 2021-10-07 9:33 UTC (permalink / raw)
To: dev, gakhil, nicolas.chautru; +Cc: david.marchand, hemant.agrawal
From: Nicolas Chautru <nicolas.chautru@intel.com>
Adding device information to explicitly capture the assumed
byte endianness of the input/output data being processed.
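For example, an application could check this capability roughly as follows
(illustrative snippet, assuming the usual bbdev headers; how a mismatch is
handled is up to the application):
struct rte_bbdev_info info;
rte_bbdev_info_get(dev_id, &info);
if (info.drv.data_endianness == RTE_BBDEV_BIG_ENDIAN) {
	/* Device expects BE data: byte-swap input before enqueue and
	 * output after dequeue on a little-endian host.
	 */
}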
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
lib/bbdev/rte_bbdev.h | 8 ++++++++
6 files changed, 13 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..a991f01bf3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,7 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* bbdev: Added device info related to data byte endianness processing assumption.
ABI Changes
-----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..5c3901c9ca 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1088,6 +1088,7 @@ acc100_dev_info_get(struct rte_bbdev *dev,
#else
dev_info->harq_buffer_size = 0;
#endif
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
acc100_check_ir(d);
}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..c7f15c0bfc 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..72e213ed9a 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 77e9a2ecbc..193f701028 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -251,6 +251,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
dev_info->capabilities = bbdev_capabilities;
dev_info->min_alignment = 64;
dev_info->harq_buffer_size = 0;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
}
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e697..b3f30002cd 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -49,6 +49,12 @@ enum rte_bbdev_state {
RTE_BBDEV_INITIALIZED
};
+/** Definitions of device data byte endianness types */
+enum rte_bbdev_endianness {
+ RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
+ RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
+};
+
/**
* Get the total number of devices that have been successfully initialised.
*
@@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
uint16_t min_alignment;
/** HARQ memory available in kB */
uint32_t harq_buffer_size;
+ /** Byte endianness assumption for input/output data */
+ enum rte_bbdev_endianness data_endianness;
/** Default queue configuration used if none is supplied */
struct rte_bbdev_queue_conf default_queue_conf;
/** Device operation capabilities */
--
2.17.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
@ 2021-10-06 20:58 4% ` Nicolas Chautru
2021-10-07 12:01 0% ` Tom Rix
2021-10-07 13:13 0% ` [dpdk-dev] [EXT] " Akhil Goyal
0 siblings, 2 replies; 200+ results
From: Nicolas Chautru @ 2021-10-06 20:58 UTC (permalink / raw)
To: dev, gakhil, nipun.gupta, trix
Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal,
david.marchand, Nicolas Chautru
Adding device information to explicitly capture the assumed
byte endianness of the input/output data being processed.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
drivers/baseband/acc100/rte_acc100_pmd.c | 1 +
drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 1 +
drivers/baseband/turbo_sw/bbdev_turbo_software.c | 1 +
lib/bbdev/rte_bbdev.h | 8 ++++++++
6 files changed, 13 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index a8900a3..f0b3006 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,7 @@ API Changes
* bbdev: Added capability related to more comprehensive CRC options.
+* bbdev: Added device info related to data byte endianness processing assumption.
ABI Changes
-----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 4e2feef..eb2c6c1 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1089,6 +1089,7 @@
#else
dev_info->harq_buffer_size = 0;
#endif
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
acc100_check_ir(d);
}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc8..c7f15c0 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c424..72e213e 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
dev_info->default_queue_conf = default_queue_conf;
dev_info->capabilities = bbdev_capabilities;
dev_info->cpu_flag_reqs = NULL;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
/* Calculates number of queues assigned to device */
dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index e1db2bf..0cab91a 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -253,6 +253,7 @@ struct turbo_sw_queue {
dev_info->capabilities = bbdev_capabilities;
dev_info->min_alignment = 64;
dev_info->harq_buffer_size = 0;
+ dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
}
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e..b3f3000 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -49,6 +49,12 @@ enum rte_bbdev_state {
RTE_BBDEV_INITIALIZED
};
+/** Definitions of device data byte endianness types */
+enum rte_bbdev_endianness {
+ RTE_BBDEV_BIG_ENDIAN, /**< Data with byte-endianness BE */
+ RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
+};
+
/**
* Get the total number of devices that have been successfully initialised.
*
@@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
uint16_t min_alignment;
/** HARQ memory available in kB */
uint32_t harq_buffer_size;
+ /** Byte endianness assumption for input/output data */
+ enum rte_bbdev_endianness data_endianness;
/** Default queue configuration used if none is supplied */
struct rte_bbdev_queue_conf default_queue_conf;
/** Device operation capabilities */
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
2021-10-06 16:42 0% ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-06 17:26 0% ` Ali Alnubani
0 siblings, 0 replies; 200+ results
From: Ali Alnubani @ 2021-10-06 17:26 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
jianwang, maxime.coquelin, chenbo.xia,
NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
jay.jayatheerthan
> -----Original Message-----
> From: Ali Alnubani
> Sent: Wednesday, October 6, 2021 7:43 PM
> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com
> Subject: RE: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> > Sent: Monday, October 4, 2021 4:56 PM
> > To: dev@dpdk.org
> > Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> > ndabilpuram@marvell.com; adwivedi@marvell.com;
> > shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> > john.miller@atomicrules.com; irusskikh@marvell.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> > rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> > sachin.saxena@oss.nxp.com; haiyue.wang@intel.com;
> johndale@cisco.com;
> > hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> > humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> > beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> > Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> > <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> > kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> > mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com;
> > maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-
> Thomas
> > Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> mdr@ashroe.eu;
> > jay.jayatheerthan@intel.com; Konstantin Ananyev
> > <konstantin.ananyev@intel.com>
> > Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> >
> > v4 changes:
> > - Fix secondary process attach (Pavan)
> > - Fix build failure (Ferruh)
> > - Update lib/ethdev/verion.map (Ferruh)
> > Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> > section makes checkpatch.sh to complain.
> >
> > v3 changes:
> > - Changes in public struct naming (Jerin/Haiyue)
> > - Split patches
> > - Update docs
> > - Shamelessly included Andrew's patch:
> >
> > https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> > andrew.rybchenko@oktetlabs.ru/
> > into these series.
> > I have to do similar thing here, so decided to avoid duplicated effort.
> >
> > The aim of these patch series is to make rte_ethdev core data
> > structures (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback,
> > etc.) internal to DPDK and not visible to the user.
> > That should allow future possible changes to core ethdev related
> > structures to be transparent to the user and help to improve ABI/API
> stability.
> > Note that current ethdev API is preserved, but it is a formal ABI break.
> >
> > The work is based on previous discussions at:
> > https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> > https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> > and consists of the following main points:
> > 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> > related data pointer from rte_eth_dev into a separate flat array.
> > We keep it public to still be able to use inline functions for these
> > 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > Note that apart from function pointers itself, each element of this
> > flat array also contains two opaque pointers for each ethdev:
> > 1) a pointer to an array of internal queue data pointers
> > 2) points to array of queue callback data pointers.
> > Note that exposing this extra information allows us to avoid extra
> > changes inside PMD level, plus should help to avoid possible
> > performance degradation.
> > 2. Change implementation of 'fast' inline ethdev functions
> > (rte_eth_rx_burst(), etc.) to use new public flat array.
> > While it is an ABI breakage, this change is intended to be transparent
> > for both users (no changes in user app is required) and PMD developers
> > (no changes in PMD is required).
> > One extra note - with new implementation RX/TX callback invocation
> > will cost one extra function call with this changes. That might cause
> > some slowdown for code-path with RX/TX callbacks heavily involved.
> > Hope such trade-off is acceptable for the community.
> > 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
> related
> > things into internal header: <ethdev_driver.h>.
> >
> > That approach was selected to:
> > - Avoid(/minimize) possible performance losses.
> > - Minimize required changes inside PMDs.
> >
> > Performance testing results (ICX 2.0GHz, E810 (ice)):
> > - testpmd macswap fwd mode, plus
> > a) no RX/TX callbacks:
> > no actual slowdown observed
> > b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> > ~2% slowdown
> > - l3fwd: no actual slowdown observed
> >
> > Would like to thank everyone who already reviewed and tested previous
> > versions of these series. All other interested parties please don't be
> > shy and provide your feedback.
> >
> > Konstantin Ananyev (7):
> > ethdev: allocate max space for internal queue array
> > ethdev: change input parameters for rx_queue_count
> > ethdev: copy ethdev 'fast' API into separate structure
> > ethdev: make burst functions to use new flat array
> > ethdev: add API to retrieve multiple ethernet addresses
> > ethdev: remove legacy Rx descriptor done API
> > ethdev: hide eth dev related structures
> >
>
> Tested single and multi-core packet forwarding performance with testpmd
> on both ConnectX-5 and ConnectX-6 Dx.
>
I should have mentioned that I didn't see any noticeable regressions in the cases above.
> Tested-by: Ali Alnubani <alialnu@nvidia.com>
>
> Thanks,
> Ali
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
` (3 preceding siblings ...)
2021-10-04 13:56 9% ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-06 16:42 0% ` Ali Alnubani
2021-10-06 17:26 0% ` Ali Alnubani
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
5 siblings, 1 reply; 200+ results
From: Ali Alnubani @ 2021-10-06 16:42 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
jianwang, maxime.coquelin, chenbo.xia,
NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
jay.jayatheerthan
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> Sent: Monday, October 4, 2021 4:56 PM
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
>
> v4 changes:
> - Fix secondary process attach (Pavan)
> - Fix build failure (Ferruh)
> - Update lib/ethdev/verion.map (Ferruh)
> Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> section makes checkpatch.sh to complain.
>
> v3 changes:
> - Changes in public struct naming (Jerin/Haiyue)
> - Split patches
> - Update docs
> - Shamelessly included Andrew's patch:
> https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> andrew.rybchenko@oktetlabs.ru/
> into these series.
> I have to do similar thing here, so decided to avoid duplicated effort.
>
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
>
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> related data pointer from rte_eth_dev into a separate flat array.
> We keep it public to still be able to use inline functions for these
> 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> Note that apart from function pointers itself, each element of this
> flat array also contains two opaque pointers for each ethdev:
> 1) a pointer to an array of internal queue data pointers
> 2) points to array of queue callback data pointers.
> Note that exposing this extra information allows us to avoid extra
> changes inside PMD level, plus should help to avoid possible
> performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
> (rte_eth_rx_burst(), etc.) to use new public flat array.
> While it is an ABI breakage, this change is intended to be transparent
> for both users (no changes in user app is required) and PMD developers
> (no changes in PMD is required).
> One extra note - with new implementation RX/TX callback invocation
> will cost one extra function call with this changes. That might cause
> some slowdown for code-path with RX/TX callbacks heavily involved.
> Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> things into internal header: <ethdev_driver.h>.
>
> That approach was selected to:
> - Avoid(/minimize) possible performance losses.
> - Minimize required changes inside PMDs.
>
> Performance testing results (ICX 2.0GHz, E810 (ice)):
> - testpmd macswap fwd mode, plus
> a) no RX/TX callbacks:
> no actual slowdown observed
> b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> ~2% slowdown
> - l3fwd: no actual slowdown observed
>
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
>
> Konstantin Ananyev (7):
> ethdev: allocate max space for internal queue array
> ethdev: change input parameters for rx_queue_count
> ethdev: copy ethdev 'fast' API into separate structure
> ethdev: make burst functions to use new flat array
> ethdev: add API to retrieve multiple ethernet addresses
> ethdev: remove legacy Rx descriptor done API
> ethdev: hide eth dev related structures
>
Tested single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx.
Tested-by: Ali Alnubani <alialnu@nvidia.com>
Thanks,
Ali
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 14/14] eventdev: mark trace variables as internal
@ 2021-10-06 7:11 5% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-06 7:11 UTC (permalink / raw)
To: Pavan Nikhilesh, Ray Kinsella; +Cc: Jerin Jacob Kollanukkaran, dev
Hello Pavan, Ray,
On Wed, Oct 6, 2021 at 8:52 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Mark rte_trace global variables as internal i.e. remove them
> from experimental section of version map.
> Some of them are used in inline APIs, mark those as global.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Please, sort those symbols.
I check with ./devtools/update-abi.sh $(cat ABI_VERSION)
> ---
> lib/eventdev/version.map | 77 ++++++++++++++++++----------------------
> 1 file changed, 35 insertions(+), 42 deletions(-)
>
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 068d186c66..617fff0ae6 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -88,57 +88,19 @@ DPDK_22 {
> rte_event_vector_pool_create;
> rte_eventdevs;
>
> - #added in 21.11
> - rte_event_fp_ops;
> -
> - local: *;
> -};
> -
> -EXPERIMENTAL {
> - global:
> -
> # added in 20.05
At the next ABI bump, ./devtools/update-abi.sh will strip those
comments from the stable section.
You can notice this when you run ./devtools/update-abi.sh $CURRENT_ABI
as suggested above.
I would strip the comments now that the symbols are going to stable.
Ray, do you have an opinion?
--
David Marchand
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v3 04/14] eventdev: move inline APIs into separate structure
@ 2021-10-06 6:50 2% ` pbhagavatula
1 sibling, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-06 6:50 UTC (permalink / raw)
To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
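For illustration, once the fast-path conversion is done, an inline enqueue
is expected to dispatch through the flat array roughly as below (sketch
only; see the later patches in this series for the exact inline code):
static inline uint16_t
example_enqueue_burst(uint8_t dev_id, uint8_t port_id,
		const struct rte_event ev[], uint16_t nb_events)
{
	const struct rte_event_fp_ops *fp_ops = &rte_event_fp_ops[dev_id];
	void *port = fp_ops->data[port_id];
	return fp_ops->enqueue_burst(port, ev, nb_events);
}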
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
lib/eventdev/eventdev_pmd.h | 38 +++++++++++
lib/eventdev/eventdev_pmd_pci.h | 4 +-
lib/eventdev/eventdev_private.c | 112 +++++++++++++++++++++++++++++++
lib/eventdev/meson.build | 1 +
lib/eventdev/rte_eventdev.c | 22 +++++-
lib/eventdev/rte_eventdev_core.h | 28 ++++++++
lib/eventdev/version.map | 6 ++
7 files changed, 209 insertions(+), 2 deletions(-)
create mode 100644 lib/eventdev/eventdev_private.c
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7eb2aa0520..b188280778 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1189,4 +1189,42 @@ __rte_internal
int
rte_event_pmd_release(struct rte_eventdev *eventdev);
+/**
+ *
+ * @internal
+ * This is the last step of device probing.
+ * It must be called after a port is allocated and initialized successfully.
+ *
+ * @param eventdev
+ * New event device.
+ */
+__rte_internal
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev);
+
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+ const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
#endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..499852db16 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,10 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
/* Invoke PMD device initialization function */
retval = devinit(eventdev);
- if (retval == 0)
+ if (retval == 0) {
+ event_dev_probing_finish(eventdev);
return 0;
+ }
RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+ __rte_unused const struct rte_event *ev)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+ __rte_unused const struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue same destination requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event crypto adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_event_fp_ops dummy = {
+ .enqueue = dummy_event_enqueue,
+ .enqueue_burst = dummy_event_enqueue_burst,
+ .enqueue_new_burst = dummy_event_enqueue_burst,
+ .enqueue_forward_burst = dummy_event_enqueue_burst,
+ .dequeue = dummy_event_dequeue,
+ .dequeue_burst = dummy_event_dequeue_burst,
+ .txa_enqueue = dummy_event_tx_adapter_enqueue,
+ .txa_enqueue_same_dest =
+ dummy_event_tx_adapter_enqueue_same_dest,
+ .ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .data = dummy_data,
+ };
+
+ *fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+ const struct rte_eventdev *dev)
+{
+ fp_op->enqueue = dev->enqueue;
+ fp_op->enqueue_burst = dev->enqueue_burst;
+ fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+ fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+ fp_op->dequeue = dev->dequeue;
+ fp_op->dequeue_burst = dev->dequeue_burst;
+ fp_op->txa_enqueue = dev->txa_enqueue;
+ fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+ fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
endif
sources = files(
+ 'eventdev_private.c',
'rte_eventdev.c',
'rte_event_ring.c',
'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..4c30a37831 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
/* Event dev north bound API implementation */
uint8_t
@@ -300,8 +303,8 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf)
{
- struct rte_eventdev *dev;
struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
int diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
return diag;
}
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
/* Configure the device */
diag = (*dev->dev_ops->dev_configure)(dev);
if (diag != 0) {
RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
event_dev_queue_config(dev, 0);
event_dev_port_config(dev, 0);
}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
else
return diag;
+ event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
return 0;
}
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
dev->data->dev_started = 0;
(*dev->dev_ops->dev_stop)(dev);
rte_eventdev_trace_stop(dev_id);
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
}
int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
return -EBUSY;
}
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
rte_eventdev_trace_close(dev_id);
return (*dev->dev_ops->dev_close)(dev);
}
@@ -1435,6 +1445,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
if (eventdev == NULL)
return -EINVAL;
+ event_dev_fp_ops_reset(rte_event_fp_ops + eventdev->data->dev_id);
eventdev->attached = RTE_EVENTDEV_DETACHED;
eventdev_globals.nb_devs--;
@@ -1460,6 +1471,15 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
return 0;
}
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev)
+{
+ if (eventdev == NULL)
+ return;
+
+ event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+ eventdev);
+}
static int
handle_dev_list(const char *cmd __rte_unused,
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..4461073101 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,34 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+struct rte_event_fp_ops {
+ event_enqueue_t enqueue;
+ /**< PMD enqueue function. */
+ event_enqueue_burst_t enqueue_burst;
+ /**< PMD enqueue burst function. */
+ event_enqueue_burst_t enqueue_new_burst;
+ /**< PMD enqueue burst new function. */
+ event_enqueue_burst_t enqueue_forward_burst;
+ /**< PMD enqueue burst fwd function. */
+ event_dequeue_t dequeue;
+ /**< PMD dequeue function. */
+ event_dequeue_burst_t dequeue_burst;
+ /**< PMD dequeue burst function. */
+ event_tx_adapter_enqueue_t txa_enqueue;
+ /**< PMD Tx adapter enqueue function. */
+ event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+ /**< PMD Tx adapter enqueue same destination function. */
+ event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ uintptr_t reserved[2];
+
+ void **data;
+ /**< points to array of internal port data pointers */
+ uintptr_t reserved2[4];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
#define RTE_EVENTDEV_NAME_MAX_LEN (64)
/**< @internal Max length of name of event PMD */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..a3a732089b 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
rte_event_timer_cancel_burst;
rte_eventdevs;
+ #added in 21.11
+ rte_event_fp_ops;
+
local: *;
};
@@ -141,6 +144,9 @@ EXPERIMENTAL {
INTERNAL {
global:
+ event_dev_fp_ops_reset;
+ event_dev_fp_ops_set;
+ event_dev_probing_finish;
rte_event_pmd_selftest_seqn_dynfield_offset;
rte_event_pmd_allocate;
rte_event_pmd_get_named_dev;
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-05 20:15 3% ` Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
To: dev
Cc: Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson, David Marchand,
Olivier Matz, Ray Kinsella
Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:
* rdline_new(): allocate and initialize struct rdline.
This function replaces rdline_init() and takes an extra parameter:
opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.
Remove rdline_init() function from library headers and export list,
because using it requires the knowledge of sizeof(struct rdline).
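For illustration, a minimal usage sketch of the new API (assuming the
callback signatures declared in cmdline_rdline.h; all names below are
just examples, not part of the patch):

#include <stdio.h>
#include <cmdline_rdline.h>

struct my_ctx {
	int lines;
};

/* User data now travels through rdline_new() and is read back with
 * rdline_get_opaque() instead of touching rdl->opaque directly. */
static void
my_validate(struct rdline *rdl, const char *buf, unsigned int size)
{
	struct my_ctx *ctx = rdline_get_opaque(rdl);

	(void)buf;
	(void)size;
	ctx->lines++;
}

static int
my_write_char(struct rdline *rdl, char c)
{
	(void)rdl;
	return fputc(c, stdout);
}

static int
my_complete(struct rdline *rdl, const char *buf, char *dst,
	    unsigned int dstsize, int *state)
{
	/* no completion in this sketch */
	(void)rdl; (void)buf; (void)dst; (void)dstsize; (void)state;
	return 0;
}

int
main(void)
{
	struct my_ctx ctx = { 0 };
	struct rdline *rdl = rdline_new(my_write_char, my_validate,
					my_complete, &ctx);

	if (rdl == NULL)
		return 1;
	printf("history buffer: %zu bytes\n",
	       rdline_get_history_buffer_size(rdl));
	rdline_free(rdl);
	return 0;
}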
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 22 ++++---
doc/guides/rel_notes/release_21_11.rst | 3 +
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline_private.h | 49 +++++++++++++++
lib/cmdline/cmdline_rdline.c | 43 ++++++++++++-
lib/cmdline/cmdline_rdline.h | 87 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
8 files changed, 148 insertions(+), 69 deletions(-)
diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
struct rdline *rdl = cmdline_get_rdline(cl);
cmdline_printf(cl, "History buffer size: %zu\n",
- sizeof(rdl->history_buf));
+ rdline_get_history_buffer_size(rdl));
}
cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..24bc03fccb 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,19 @@ test_cmdline_parse_fns(void)
static int
test_cmdline_rdline_fns(void)
{
- struct rdline rdl;
+ struct rdline *rdl = NULL;
rdline_write_char_t *wc = &cmdline_write_char;
rdline_validate_t *v = &valid_buffer;
rdline_complete_t *c = &complete_buffer;
- if (rdline_init(NULL, wc, v, c) >= 0)
+ rdl = rdline_new(NULL, v, c, NULL);
+ if (rdl != NULL)
goto error;
- if (rdline_init(&rdl, NULL, v, c) >= 0)
+ rdl = rdline_new(wc, NULL, c, NULL);
+ if (rdl != NULL)
goto error;
- if (rdline_init(&rdl, wc, NULL, c) >= 0)
- goto error;
- if (rdline_init(&rdl, wc, v, NULL) >= 0)
+ rdl = rdline_new(wc, v, NULL, NULL);
+ if (rdl != NULL)
goto error;
if (rdline_char_in(NULL, 0) >= 0)
goto error;
@@ -102,25 +103,30 @@ test_cmdline_rdline_fns(void)
goto error;
if (rdline_add_history(NULL, "history") >= 0)
goto error;
- if (rdline_add_history(&rdl, NULL) >= 0)
+ if (rdline_add_history(rdl, NULL) >= 0)
goto error;
if (rdline_get_history_item(NULL, 0) != NULL)
goto error;
/* void functions */
+ rdline_get_history_buffer_size(NULL);
+ rdline_get_opaque(NULL);
rdline_newline(NULL, "prompt");
- rdline_newline(&rdl, NULL);
+ rdline_newline(rdl, NULL);
rdline_stop(NULL);
rdline_quit(NULL);
rdline_restart(NULL);
rdline_redisplay(NULL);
rdline_reset(NULL);
rdline_clear_history(NULL);
+ rdline_free(NULL);
+ rdline_free(rdl);
return 0;
error:
printf("Error: function accepted null parameter!\n");
+ rdline_free(rdl);
return -1;
}
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+ to dynamically allocate and free it, and to access user data in callbacks.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
cl->ctx = ctx;
ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
- cmdline_complete_buffer);
+ cmdline_complete_buffer, cl);
if (ret != 0) {
free(cl);
return NULL;
}
- cl->rdl.opaque = cl;
cmdline_set_prompt(cl, prompt);
rdline_newline(&cl->rdl, cl->prompt);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
#include <cmdline.h>
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE 32
+#define RDLINE_VT100_BUF_SIZE 8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+ RDLINE_INIT,
+ RDLINE_RUNNING,
+ RDLINE_EXITED
+};
+
+struct rdline {
+ enum rdline_status status;
+ /* rdline bufs */
+ struct cirbuf left;
+ struct cirbuf right;
+ char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+ char right_buf[RDLINE_BUF_SIZE];
+
+ char prompt[RDLINE_PROMPT_SIZE];
+ unsigned int prompt_size;
+
+ char kill_buf[RDLINE_BUF_SIZE];
+ unsigned int kill_size;
+
+ /* history */
+ struct cirbuf history;
+ char history_buf[RDLINE_HISTORY_BUF_SIZE];
+ int history_cur_line;
+
+ /* callbacks and func pointers */
+ rdline_write_char_t *write_char;
+ rdline_validate_t *validate;
+ rdline_complete_t *complete;
+
+ /* vt100 parser */
+ struct cmdline_vt100 vt100;
+
+ /* opaque pointer */
+ void *opaque;
+};
+
#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal {
DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
__rte_format_printf(2, 0)
int cmdline_vdprintf(int fd, const char *format, va_list op);
+int rdline_init(struct rdline *rdl,
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+
#endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..d92b1cda53 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
#include <ctype.h>
#include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
#include "cmdline_rdline.h"
static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
int
rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete)
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
{
if (!rdl || !write_char || !validate || !complete)
return -EINVAL;
@@ -47,10 +49,33 @@ rdline_init(struct rdline *rdl,
rdl->validate = validate;
rdl->complete = complete;
rdl->write_char = write_char;
+ rdl->opaque = opaque;
rdl->status = RDLINE_INIT;
return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
}
+struct rdline *
+rdline_new(rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
+{
+ struct rdline *rdl;
+
+ rdl = malloc(sizeof(*rdl));
+ if (rdline_init(rdl, write_char, validate, complete, opaque) < 0) {
+ free(rdl);
+ rdl = NULL;
+ }
+ return rdl;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+ free(rdl);
+}
+
void
rdline_newline(struct rdline *rdl, const char *prompt)
{
@@ -564,6 +589,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
return NULL;
}
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+ return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+ return rdl != NULL ? rdl->opaque : NULL;
+}
+
int
rdline_add_history(struct rdline * rdl, const char * buf)
{
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..af66b70495 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
/**
* This file is a small equivalent to the GNU readline library, but it
* was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
*
* Obviously, it does not support as many things as the GNU readline,
* but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
*/
#include <stdio.h>
+#include <rte_compat.h>
#include <cmdline_cirbuf.h>
#include <cmdline_vt100.h>
@@ -38,19 +37,6 @@
extern "C" {
#endif
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE 32
-#define RDLINE_VT100_BUF_SIZE 8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
- RDLINE_INIT,
- RDLINE_RUNNING,
- RDLINE_EXITED
-};
-
struct rdline;
typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,33 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
char *dstbuf, unsigned int dstsize,
int *state);
-struct rdline {
- enum rdline_status status;
- /* rdline bufs */
- struct cirbuf left;
- struct cirbuf right;
- char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
- char right_buf[RDLINE_BUF_SIZE];
-
- char prompt[RDLINE_PROMPT_SIZE];
- unsigned int prompt_size;
-
- char kill_buf[RDLINE_BUF_SIZE];
- unsigned int kill_size;
-
- /* history */
- struct cirbuf history;
- char history_buf[RDLINE_HISTORY_BUF_SIZE];
- int history_cur_line;
-
- /* callbacks and func pointers */
- rdline_write_char_t *write_char;
- rdline_validate_t *validate;
- rdline_complete_t *complete;
-
- /* vt100 parser */
- struct cmdline_vt100 vt100;
-
- /* opaque pointer */
- void *opaque;
-};
-
/**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
+ * \param rdl Receives a pointer to the allocated structure.
* \param write_char The function used by the function to write a character
* \param validate A pointer to the function to execute when the
* user validates the buffer.
* \param complete A pointer to the function to execute when the
* user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return 0 on success, negative errno-style code in failure.
*/
-int rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete);
+__rte_experimental
+struct rdline *rdline_new(rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ * If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
/**
* Init the current buffer, and display a prompt.
@@ -194,6 +161,18 @@ void rdline_clear_history(struct rdline *rdl);
*/
char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..b9bbb87510 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
rdline_clear_history;
rdline_get_buffer;
rdline_get_history_item;
- rdline_init;
rdline_newline;
rdline_quit;
rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 20.11
cmdline_get_rdline;
+ # added in 21.11
+ rdline_new;
+ rdline_free;
+ rdline_get_history_buffer_size;
+ rdline_get_opaque;
+
local: *;
};
--
2.29.3
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
@ 2021-10-05 20:15 4% ` Dmitry Kozlyuk
2021-10-05 20:15 3% ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk, David Marchand, Olivier Matz, Ray Kinsella
Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html
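For illustration, a minimal sketch of the expected usage now that the
structure is opaque: instances are only created and destroyed through
the existing public API, never declared by value.

#include <cmdline.h>
#include <cmdline_socket.h>

static void
run_cli(cmdline_parse_ctx_t *ctx)
{
	/* allocate, run and tear down the command line via the API only */
	struct cmdline *cl = cmdline_stdin_new(ctx, "example> ");

	if (cl == NULL)
		return;
	cmdline_interact(cl);
	cmdline_stdin_exit(cl);
}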
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/cmdline/cmdline.h | 19 -------------------
lib/cmdline/cmdline_private.h | 8 +++++++-
4 files changed, 9 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
- content. On Linux and FreeBSD, supported prior to DPDK 20.11,
- original structure will be kept until DPDK 21.11.
-
* security: The functions ``rte_security_set_pkt_metadata`` and
``rte_security_get_userdata`` will be made inline functions and additional
flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
#ifndef _CMDLINE_H_
#define _CMDLINE_H_
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
#include <rte_common.h>
#include <rte_compat.h>
@@ -27,23 +23,8 @@
extern "C" {
#endif
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
- int s_in;
- int s_out;
- cmdline_parse_ctx_t *ctx;
- struct rdline rdl;
- char prompt[RDLINE_PROMPT_SIZE];
- struct termios oldterm;
-};
-
-#else
-
struct cmdline;
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
#include <rte_os_shim.h>
#ifdef RTE_EXEC_ENV_WINDOWS
#include <rte_windows.h>
+#else
+#include <termios.h>
#endif
#include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
int is_console_input;
int is_console_output;
};
+#endif
struct cmdline {
int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
cmdline_parse_ctx_t *ctx;
struct rdline rdl;
char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal oldterm;
char repeated_char;
WORD repeat_count;
-};
+#else
+ struct termios oldterm;
#endif
+};
/* Disable buffering and echoing, save previous settings to oldterm. */
void terminal_adjust(struct cmdline *cl);
--
2.29.3
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05 0:55 3% ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
@ 2021-10-05 20:15 4% ` Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
` (2 more replies)
2 siblings, 3 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk
Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.
v4: rdline_create -> rdline_new, restore empty line (Olivier).
v3: add experimental tags and releae notes for rdline.
v2: also hide struct rdline (David, Olivier).
Dmitry Kozlyuk (2):
cmdline: make struct cmdline opaque
cmdline: make struct rdline opaque
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 22 ++++---
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_21_11.rst | 5 ++
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline.h | 19 ------
lib/cmdline/cmdline_private.h | 57 ++++++++++++++++-
lib/cmdline/cmdline_rdline.c | 43 ++++++++++++-
lib/cmdline/cmdline_rdline.h | 87 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
10 files changed, 157 insertions(+), 93 deletions(-)
--
2.29.3
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
2021-10-05 13:09 0% ` Thomas Monjalon
@ 2021-10-05 16:41 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-05 16:41 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
Jay
> 04/10/2021 15:55, Konstantin Ananyev:
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for 'fast' calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>
> I don't understand why 'fast' is quoted.
> It looks strange.
>
>
> > +/* reset eth 'fast' API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth 'fast' API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > + const struct rte_eth_dev *dev);
>
> I assume "fp" stands for fast path.
Yes.
> Please write "fast path" completely in the comments.
Ok.
> > + /* expose selection of PMD rx/tx function */
> > + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> [...]
> > + /* point rx/tx functions to dummy ones */
> > + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>
> Nit: Rx/Tx
> or could be "fast path", to be consistent.
>
> > + /*
> > + * for secondary process, at that point we expect device
> > + * to be already 'usable', so shared data and all function pointers
> > + * for 'fast' devops have to be setup properly inside rte_eth_dev.
> > + */
> > + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> > rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> > dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 948c0b71c1..fe47a660c7 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> > typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> > /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointernals to internal ethdev RX/TXi
>
> typos in above line
>
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for 'fast' ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > + void **data;
> > + /**< points to array of internal queue data pointers */
> > + void **clbk;
> > + /**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * 'fast' ethdev funcions and related data are hold in a flat array.
> > + * one entry per ethdev.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > + /** first 64B line */
> > + eth_rx_burst_t rx_pkt_burst;
> > + /**< PMD receive function. */
> > + eth_tx_burst_t tx_pkt_burst;
> > + /**< PMD transmit function. */
> > + eth_tx_prep_t tx_pkt_prepare;
> > + /**< PMD transmit prepare function. */
> > + eth_rx_queue_count_t rx_queue_count;
> > + /**< Get the number of used RX descriptors. */
> > + eth_rx_descriptor_status_t rx_descriptor_status;
> > + /**< Check the status of a Rx descriptor. */
> > + eth_tx_descriptor_status_t tx_descriptor_status;
> > + /**< Check the status of a Tx descriptor. */
> > + uintptr_t reserved[2];
>
> uintptr_t size is not fix.
> I think you mean uint64_t.
Nope, I meant 'uintptr_t' here.
That way it fits really nicely on both 64-bit and 32-bit systems.
For 64-bit systems we have all function pointers on the first 64B line
and all data pointers on the second 64B line.
For 32-bit systems we have all fields within the first 64B line.
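To put numbers on it, a quick stand-in (local names only, not the DPDK
definitions; assumes function and data pointers have the same size, as
on the usual LP64/ILP32 ABIs):

#include <stdint.h>

typedef void (*fp_t)(void);

struct qdata {
	void **data;
	void **clbk;
};

struct fp_ops_layout {
	/* 64-bit: 6 function pointers (48B) + reserved[2] (16B) = 64B */
	fp_t rx_pkt_burst;
	fp_t tx_pkt_burst;
	fp_t tx_pkt_prepare;
	fp_t rx_queue_count;
	fp_t rx_descriptor_status;
	fp_t tx_descriptor_status;
	uintptr_t reserved[2];
	/* 64-bit: rxq (16B) + txq (16B) + reserved2[4] (32B) = 64B */
	struct qdata rxq;
	struct qdata txq;
	uintptr_t reserved2[4];
};

/* 16 pointer-sized words in total: 128B (two cache lines) with 8B
 * pointers, 64B (one cache line) with 4B pointers. */
_Static_assert(sizeof(struct fp_ops_layout) == 16 * sizeof(uintptr_t),
	       "layout arithmetic above does not hold");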
> > +
> > + /** second 64B line */
> > + struct rte_ethdev_qdata rxq;
> > + struct rte_ethdev_qdata txq;
> > + uintptr_t reserved2[4];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
2021-10-05 12:14 4% ` [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal Harman Kalra
@ 2021-10-05 16:07 0% ` Stephen Hemminger
2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-05 16:07 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev
On Thu, 26 Aug 2021 20:27:19 +0530
Harman Kalra <hkalra@marvell.com> wrote:
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get set wrapper APIs
> to read or manipulate its fields.. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are defined
> and also hides struct rte_intr_handle definition.
I agree the rte_intr_handle and eth_devices structures need to be hidden.
But there does not appear to be an API to check whether a device supports
receive interrupt mode.
There is:
RTE_ETH_DEV_INTR_LSC - link state
RTE_ETH_DEV_INTR_RMV - interrupt on removal
but no
RTE_ETH_DEV_INTR_RXQ - device supports rxq interrupt
There should be a new flag reported by devices, and the intr_conf should
be checked in rte_eth_dev_configure.
Doing this would require fixing many drivers, and there is a risk of exposing
existing semantic bugs in applications.
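A rough sketch of the check (RTE_ETH_DEV_INTR_RXQ and its value are
hypothetical here, since no such flag exists today):

#include <errno.h>
#include <stdint.h>

#define RTE_ETH_DEV_INTR_RXQ 0x0100 /* hypothetical flag value */

/* Fail configuration early if Rx queue interrupts are requested but the
 * device did not report support for them. */
static int
check_rxq_intr_support(uint32_t dev_flags, int rxq_intr_requested)
{
	if (rxq_intr_requested && (dev_flags & RTE_ETH_DEV_INTR_RXQ) == 0)
		return -ENOTSUP;
	return 0;
}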
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] sort symbols map
2021-10-05 14:16 0% ` Kinsella, Ray
2021-10-05 14:31 0% ` David Marchand
@ 2021-10-05 15:06 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-05 15:06 UTC (permalink / raw)
To: David Marchand
Cc: dev, Kinsella, Ray, Thomas Monjalon, Yigit, Ferruh,
Jasvinder Singh, Cristian Dumitrescu, Vladimir Medvedkin,
Conor Walsh, Stephen Hemminger
On Tue, Oct 5, 2021 at 4:17 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> On 05/10/2021 10:16, David Marchand wrote:
> > Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> >
> > Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> > Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> > Fixes: 4aeb92396b85 ("rib: promote API to stable")
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Applied, thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] sort symbols map
2021-10-05 14:16 0% ` Kinsella, Ray
@ 2021-10-05 14:31 0% ` David Marchand
2021-10-05 15:06 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-05 14:31 UTC (permalink / raw)
To: Kinsella, Ray
Cc: dev, Thomas Monjalon, Yigit, Ferruh, Jasvinder Singh,
Cristian Dumitrescu, Vladimir Medvedkin, Conor Walsh,
Stephen Hemminger
On Tue, Oct 5, 2021 at 4:17 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> On 05/10/2021 10:16, David Marchand wrote:
> > Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> >
> > Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> > Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> > Fixes: 4aeb92396b85 ("rib: promote API to stable")
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> > I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
> >
> > I should have caught it when merging fib and rib patches...
> > But my eyes (or more likely brain) stopped at net/softnic bits.
> >
> > What do you think?
> > Should I wait a bit more and send a global patch to catch any missed
> > sorting just before rc1?
> >
> > In the meantime, if you merge .map updates, try to remember to run the
> > command above.
> >
> > Thanks.
> > ---
> > drivers/net/softnic/version.map | 2 +-
> > lib/fib/version.map | 21 ++++++++++-----------
> > lib/rib/version.map | 33 ++++++++++++++++-----------------
> > 3 files changed, 27 insertions(+), 29 deletions(-)
> >
>
> Something to add to the Symbol Bot also, maybe?
The committed maps should have no issue in the first place.
The best place would probably be in checkpatches.sh so that developers
get the warning before even posting and so that maintainers fix the
issues before pushing.
But it requires checked out sources.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] sort symbols map
2021-10-05 9:16 4% [dpdk-dev] [PATCH] sort symbols map David Marchand
@ 2021-10-05 14:16 0% ` Kinsella, Ray
2021-10-05 14:31 0% ` David Marchand
2021-10-05 15:06 0% ` David Marchand
2021-10-11 11:36 0% ` Dumitrescu, Cristian
1 sibling, 2 replies; 200+ results
From: Kinsella, Ray @ 2021-10-05 14:16 UTC (permalink / raw)
To: David Marchand, dev
Cc: thomas, ferruh.yigit, Jasvinder Singh, Cristian Dumitrescu,
Vladimir Medvedkin, Conor Walsh, Stephen Hemminger
On 05/10/2021 10:16, David Marchand wrote:
> Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
>
> Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> Fixes: 4aeb92396b85 ("rib: promote API to stable")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
>
> I should have caught it when merging fib and rib patches...
> But my eyes (or more likely brain) stopped at net/softnic bits.
>
> What do you think?
> Should I wait a bit more and send a global patch to catch any missed
> sorting just before rc1?
>
> In the meantime, if you merge .map updates, try to remember to run the
> command above.
>
> Thanks.
> ---
> drivers/net/softnic/version.map | 2 +-
> lib/fib/version.map | 21 ++++++++++-----------
> lib/rib/version.map | 33 ++++++++++++++++-----------------
> 3 files changed, 27 insertions(+), 29 deletions(-)
>
Something to add to the Symbol Bot also, maybe?
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
2021-10-04 13:55 2% ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-05 13:09 0% ` Thomas Monjalon
2021-10-05 16:41 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-05 13:09 UTC (permalink / raw)
To: Konstantin Ananyev
Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan
04/10/2021 15:55, Konstantin Ananyev:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for 'fast' calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
I don't understand why 'fast' is quoted.
It looks strange.
> +/* reset eth 'fast' API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth 'fast' API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> + const struct rte_eth_dev *dev);
I assume "fp" stands for fast path.
Please write "fast path" completely in the comments.
> + /* expose selection of PMD rx/tx function */
> + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
[...]
> + /* point rx/tx functions to dummy ones */
> + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
Nit: Rx/Tx
or could be "fast path", to be consistent.
> + /*
> + * for secondary process, at that point we expect device
> + * to be already 'usable', so shared data and all function pointers
> + * for 'fast' devops have to be setup properly inside rte_eth_dev.
> + */
> + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
> rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>
> dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 948c0b71c1..fe47a660c7 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> /**< @internal Check the status of a Tx descriptor */
>
> +/**
> + * @internal
> + * Structure used to hold opaque pointernals to internal ethdev RX/TXi
typos in above line
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for 'fast' ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> + void **data;
> + /**< points to array of internal queue data pointers */
> + void **clbk;
> + /**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * 'fast' ethdev funcions and related data are hold in a flat array.
> + * one entry per ethdev.
> + */
> +struct rte_eth_fp_ops {
> +
> + /** first 64B line */
> + eth_rx_burst_t rx_pkt_burst;
> + /**< PMD receive function. */
> + eth_tx_burst_t tx_pkt_burst;
> + /**< PMD transmit function. */
> + eth_tx_prep_t tx_pkt_prepare;
> + /**< PMD transmit prepare function. */
> + eth_rx_queue_count_t rx_queue_count;
> + /**< Get the number of used RX descriptors. */
> + eth_rx_descriptor_status_t rx_descriptor_status;
> + /**< Check the status of a Rx descriptor. */
> + eth_tx_descriptor_status_t tx_descriptor_status;
> + /**< Check the status of a Tx descriptor. */
> + uintptr_t reserved[2];
uintptr_t size is not fixed.
I think you mean uint64_t.
> +
> + /** second 64B line */
> + struct rte_ethdev_qdata rxq;
> + struct rte_ethdev_qdata txq;
> + uintptr_t reserved2[4];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-05 12:14 4% ` [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal Harman Kalra
@ 2021-10-05 12:14 1% ` Harman Kalra
0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson; +Cc: david.marchand, dmitry.kozliuk, mdr
Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.
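For illustration, the pattern this patch applies throughout the files
below (a small sketch, not part of the diff), using the getters added
earlier in the series:

#include <rte_interrupts.h>

/* Fields of struct rte_intr_handle are no longer dereferenced directly;
 * callers go through the accessor API instead. */
static int
intr_handle_is_usable(const struct rte_intr_handle *intr_handle)
{
	/* previously: intr_handle->fd < 0 || intr_handle->type == ... */
	if (rte_intr_fd_get(intr_handle) < 0)
		return 0;
	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UNKNOWN)
		return 0;
	return 1;
}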
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 111 ++++++++----
lib/eal/include/rte_interrupts.h | 2 +
lib/eal/linux/eal_interrupts.c | 302 +++++++++++++++++++------------
3 files changed, 268 insertions(+), 147 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..cf6216601b 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -86,10 +86,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
{
struct rte_intr_callback *callback;
struct rte_intr_source *src;
- int ret = 0, add_event = 0;
+ int ret = 0, add_event = 0, mem_allocator;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,35 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ /* src->interrupt instance memory allocated
+ * depends on from where intr_handle memory
+ * is allocated.
+ */
+ mem_allocator =
+ rte_intr_instance_mem_allocator_get(
+ intr_handle);
+ if (mem_allocator == 0)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_TRAD_HEAP);
+ else if (mem_allocator == 1)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +179,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +202,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +243,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +258,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +299,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +313,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +346,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +398,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +422,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +440,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +464,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +476,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +499,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +512,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +583,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +595,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +607,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index db830907fb..442b02de8f 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -28,6 +28,8 @@ struct rte_intr_handle;
/** Interrupt instance allocation flags
* @see rte_intr_instance_alloc
*/
+/** Allocate interrupt instance from traditional heap */
+#define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
/** Allocate interrupt instance using DPDK memory management APIs */
#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a9d6833b79 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -475,14 +498,15 @@ int
rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *cb_arg)
{
- int ret, wake_thread;
+ int ret, wake_thread, mem_allocator;
struct rte_intr_source *src;
struct rte_intr_callback *callback;
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,34 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ /* src->interrupt instance memory allocated depends on
+ * from where intr_handle memory is allocated.
+ */
+ mem_allocator =
+ rte_intr_instance_mem_allocator_get(intr_handle);
+ if (mem_allocator == 0)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_TRAD_HEAP);
+ else if (mem_allocator == 1)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +602,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +612,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +653,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +663,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +695,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +727,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +785,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +808,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +851,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +861,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +919,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +965,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1029,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1069,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1079,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1182,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1259,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1480,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1489,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1503,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1515,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1540,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1562,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1571,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1608,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1630,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal
@ 2021-10-05 12:14 4% ` Harman Kalra
2021-10-05 12:14 1% ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-05 16:07 0% ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
2 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Moving struct rte_intr_handle to an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit must be changed or the arrays must be
allocated dynamically based on the PCI device MSI-X size at probe time.
Either way it is an ABI breakage.
This change is already included in the 21.11 ABI improvement
spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to any of the fields
should be done via these get/set APIs.
A new file, eal_common_interrupts.c, is introduced where all these APIs
are defined; it also hides the struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides the prototypes and implementation of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most drivers define the
interrupt handle instance as static, but it can no longer be static
because the size of rte_intr_handle is unknown to the drivers. Drivers
are expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase (a usage sketch
follows the patch descriptions below).
This patch also rearranges the headers related to the interrupt
framework. Epoll-related definitions and prototypes are moved into a
new header, rte_epoll.h, and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the
series rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as required and avoid accessing the fields
directly.
Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating the interrupt test suite to use the interrupt handle APIs.
Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to allocate the
interrupt instance, use the get/set APIs with the allocated interrupt
handle, and free it on cleanup.
Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated as per device requirements using the new API
rte_intr_handle_event_list_update(). E.g., at PCI device probe time the
MSI-X size can be queried and these arrays reallocated accordingly.
Patch 6: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the
alarm interrupt instance can be freed in alarm fini.
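To make the intended driver-side pattern concrete, below is a minimal
sketch (not part of the series itself) built only from accessor names
that appear in these patches - rte_intr_instance_alloc(),
rte_intr_fd_get(), rte_intr_type_get() and rte_intr_instance_free().
The allocator flag and exact signatures may still change in later
revisions, so treat it as an illustration rather than the final API:

	#include <rte_interrupts.h>

	/* Sketch only: allocate an interrupt instance instead of embedding
	 * a static struct rte_intr_handle, and touch its fields only
	 * through the get/set accessors.
	 */
	static int
	example_intr_setup(void)
	{
		struct rte_intr_handle *intr_handle;

		/* memory for the handle now comes from the library */
		intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
		if (intr_handle == NULL)
			return -1;

		/* ... bus/driver init code fills the instance via set APIs ... */

		/* fields are read back through get APIs, never dereferenced */
		if (rte_intr_fd_get(intr_handle) < 0 ||
		    rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UNKNOWN) {
			rte_intr_instance_free(intr_handle);
			return -1;
		}

		/* ... register callbacks, enable the interrupt, run ... */

		/* the owner frees the instance during the cleanup phase */
		rte_intr_instance_free(intr_handle);
		return 0;
	}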
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and Intel i40e cards,
where interrupts are expected on packet arrival.
v1:
* Fixed FreeBSD compilation failure
* Fixed a segfault in the memif case
v2:
* Merged the prototype and implementation patches into one.
* Restricted allocation to a single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library-specific APIs as internal.
Harman Kalra (6):
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 163 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 +++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 23 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 21 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 111 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_socket.c | 24 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 36 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 12 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 75 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 649 ++++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 52 +-
lib/eal/freebsd/eal_interrupts.c | 110 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 634 ++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 302 +++++---
lib/eal/version.map | 46 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
131 files changed, 3645 insertions(+), 1706 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
2021-10-05 10:04 0% ` David Marchand
@ 2021-10-05 10:43 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-05 10:43 UTC (permalink / raw)
To: David Marchand, Konstantin Ananyev, Thomas Monjalon
Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Thomas Monjalon, Ray Kinsella, Jayatheerthan, Jay
On 10/5/2021 11:04 AM, David Marchand wrote:
> On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
>>
>> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>> data into private header (ethdev_driver.h).
>> Few minor changes to keep DPDK building after that.
>
> This change is going to hurt a lot of people :-).
> But this is a necessary move.
>
+1 that it is a necessary move, but I am surprised to see how much 'rte_eth_devices'
is accessed directly.
Do you have any idea/suggestion on how we can reduce the pain for them?
> $ git grep-all -lw rte_eth_devices |grep -v \\.patch$
> ANS/ans/ans_main.c
> BESS/core/drivers/pmd.cc
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
> FD.io-VPP/src/plugins/dpdk/device/format.c
> lagopus/src/dataplane/dpdk/dpdk_io.c
> OVS/lib/netdev-dpdk.c
> packet-journey/app/kni.c
> pktgen-dpdk/app/pktgen-port-cfg.c
> pktgen-dpdk/app/pktgen-port-cfg.h
> pktgen-dpdk/app/pktgen-stats.c
> Trex/src/dpdk_funcs.c
> Trex/src/drivers/trex_i40e_fdir.c
> Trex/src/drivers/trex_ixgbe_fdir.c
> TungstenFabric-vRouter/gdb/vr_dpdk.gdb
>
>
> I did not check all projects for their uses of rte_eth_devices, but I
> did the job for OVS.
> If you have cycles to review...
> https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/
>
> One nit:
>
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>> doc/guides/rel_notes/release_21_11.rst | 6 +
>> drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
>> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
>> drivers/net/cxgbe/base/adapter.h | 2 +-
>> drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
>> drivers/net/netvsc/hn_var.h | 1 +
>> lib/ethdev/ethdev_driver.h | 149 ++++++++++++++++++
>> lib/ethdev/rte_ethdev_core.h | 143 -----------------
>> lib/ethdev/version.map | 2 +-
>> lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
>> lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
>> lib/eventdev/rte_eventdev.c | 2 +-
>> lib/metrics/rte_metrics_telemetry.c | 2 +-
>> 13 files changed, 165 insertions(+), 152 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
>> index 6055551443..2944149943 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -228,6 +228,12 @@ ABI Changes
>> to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
>> is used by public inline function ``rte_eth_rx_queue_count``.
>>
>> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
>> + private data structures. ``rte_eth_devices[]`` can't be accessible directly
>
> accessed*
>
>> + by user any more. While it is an ABI breakage, this change is intended
>> + to be transparent for both users (no changes in user app is required) and
>> + PMD developers (no changes in PMD is required).
>> +
>>
>> Known Issues
>> ------------
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
2021-10-05 9:54 0% ` David Marchand
@ 2021-10-05 10:13 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-05 10:13 UTC (permalink / raw)
To: David Marchand
Cc: dev, Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
Daley, John, Hyong Youb Kim, Zhang, Qi Z, Wang, Xiao W, humin (Q),
Yisen Zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
Jayatheerthan, Jay
> >
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > lib/ethdev/ethdev_private.c | 31 +++++
> > lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
> > lib/ethdev/version.map | 5 +
> > 3 files changed, 210 insertions(+), 68 deletions(-)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 3eeda6e9f9..27d29b2ac6 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > fpo->txq.data = dev->data->tx_queues;
> > fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > }
> > +
> > +uint16_t
> > +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > + void *opaque)
> > +{
> > + const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > + while (cb != NULL) {
> > + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > + nb_pkts, cb->param);
> > + cb = cb->next;
> > + }
> > +
> > + return nb_rx;
> > +}
>
> This helper name is ambiguous.
> Maybe the intent was to have a generic place holder for updates in
> future releases.
Yes, that was the intent.
We have an array of opaque pointers (one per queue).
So I thought some generic name would be better - who knows
how we would need to change this function and its parameters in the future.
> But in this series, __rte_eth_rx_epilog is invoked only if a rx
> callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
Hmm, yes, it implies that we'll do a callback underneath :)
> I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks,
If there are no objections from other people - I am ok to rename it.
> and it
> does not need to be advertised as internal.
About internal vs public, I think Ferruh proposed the same.
I am not really fond of it because:
if we declare it public, we will have an obligation to support it in future releases.
Plus it might encourage users to use it on its own, which I don't think is the right thing to do.
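For readers following the thread, the per-queue chain this helper walks
is the callback list that applications build with
rte_eth_add_rx_callback(). A minimal usage sketch (not part of the patch
under review) of how such a callback ends up in that list:

	#include <rte_common.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* counts received packets; runs after the PMD burst returns */
	static uint16_t
	count_rx_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkts[],
		    uint16_t nb_rx, uint16_t max_pkts, void *user_param)
	{
		uint64_t *counter = user_param;

		RTE_SET_USED(port_id);
		RTE_SET_USED(queue_id);
		RTE_SET_USED(pkts);
		RTE_SET_USED(max_pkts);
		*counter += nb_rx;
		return nb_rx; /* a callback may also drop or rewrite packets */
	}

	static uint64_t rx_count;

	static void
	install_rx_cb(uint16_t port_id, uint16_t queue_id)
	{
		/* appends the callback to the per-queue list discussed here */
		rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb, &rx_count);
	}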
>
>
> > +
> > +uint16_t
> > +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> > + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> > +{
> > + const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > + while (cb != NULL) {
> > + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > + cb->param);
> > + cb = cb->next;
> > + }
> > +
> > + return nb_pkts;
> > +}
>
> Idem, rte_eth_call_tx_callbacks.
>
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
2021-10-04 13:56 9% ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-05 10:04 0% ` David Marchand
2021-10-05 10:43 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-05 10:04 UTC (permalink / raw)
To: Konstantin Ananyev
Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
Jayatheerthan, Jay
On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
> Few minor changes to keep DPDK building after that.
This change is going to hurt a lot of people :-).
But this is a necessary move.
$ git grep-all -lw rte_eth_devices |grep -v \\.patch$
ANS/ans/ans_main.c
BESS/core/drivers/pmd.cc
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
FD.io-VPP/src/plugins/dpdk/device/format.c
lagopus/src/dataplane/dpdk/dpdk_io.c
OVS/lib/netdev-dpdk.c
packet-journey/app/kni.c
pktgen-dpdk/app/pktgen-port-cfg.c
pktgen-dpdk/app/pktgen-port-cfg.h
pktgen-dpdk/app/pktgen-stats.c
Trex/src/dpdk_funcs.c
Trex/src/drivers/trex_i40e_fdir.c
Trex/src/drivers/trex_ixgbe_fdir.c
TungstenFabric-vRouter/gdb/vr_dpdk.gdb
I did not check all projects for their uses of rte_eth_devices, but I
did the job for OVS.
If you have cycles to review...
https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/
One nit:
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 6 +
> drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
> drivers/net/cxgbe/base/adapter.h | 2 +-
> drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
> drivers/net/netvsc/hn_var.h | 1 +
> lib/ethdev/ethdev_driver.h | 149 ++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 143 -----------------
> lib/ethdev/version.map | 2 +-
> lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
> lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
> lib/eventdev/rte_eventdev.c | 2 +-
> lib/metrics/rte_metrics_telemetry.c | 2 +-
> 13 files changed, 165 insertions(+), 152 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 6055551443..2944149943 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -228,6 +228,12 @@ ABI Changes
> to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
> is used by public inline function ``rte_eth_rx_queue_count``.
>
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> + private data structures. ``rte_eth_devices[]`` can't be accessible directly
accessed*
> + by user any more. While it is an ABI breakage, this change is intended
> + to be transparent for both users (no changes in user app is required) and
> + PMD developers (no changes in PMD is required).
> +
>
> Known Issues
> ------------
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
2021-10-04 13:56 2% ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-05 9:54 0% ` David Marchand
2021-10-05 10:13 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-05 9:54 UTC (permalink / raw)
To: Konstantin Ananyev
Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
Jayatheerthan, Jay
On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Rework 'fast' burst functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> lib/ethdev/ethdev_private.c | 31 +++++
> lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
> lib/ethdev/version.map | 5 +
> 3 files changed, 210 insertions(+), 68 deletions(-)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..27d29b2ac6 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> fpo->txq.data = dev->data->tx_queues;
> fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> }
> +
> +uint16_t
> +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> + void *opaque)
> +{
> + const struct rte_eth_rxtx_callback *cb = opaque;
> +
> + while (cb != NULL) {
> + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> + nb_pkts, cb->param);
> + cb = cb->next;
> + }
> +
> + return nb_rx;
> +}
This helper name is ambiguous.
Maybe the intent was to have a generic place holder for updates in
future releases.
But in this series, __rte_eth_rx_epilog is invoked only if a rx
callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks, and it
does not need to be advertised as internal.
> +
> +uint16_t
> +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> + const struct rte_eth_rxtx_callback *cb = opaque;
> +
> + while (cb != NULL) {
> + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> + cb->param);
> + cb = cb->next;
> + }
> +
> + return nb_pkts;
> +}
Idem, rte_eth_call_tx_callbacks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] sort symbols map
@ 2021-10-05 9:16 4% David Marchand
2021-10-05 14:16 0% ` Kinsella, Ray
2021-10-11 11:36 0% ` Dumitrescu, Cristian
0 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-10-05 9:16 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, Ray Kinsella, Jasvinder Singh,
Cristian Dumitrescu, Vladimir Medvedkin, Conor Walsh,
Stephen Hemminger
Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
Fixes: e73a7ab22422 ("net/softnic: promote manage API")
Fixes: 8f532a34c4f2 ("fib: promote API to stable")
Fixes: 4aeb92396b85 ("rib: promote API to stable")
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
I should have caught it when merging fib and rib patches...
But my eyes (or more likely brain) stopped at net/softnic bits.
What do you think?
Should I wait a bit more and send a global patch to catch any missed
sorting just before rc1?
In the meantime, if you merge .map updates, try to remember to run the
command above.
Thanks.
---
drivers/net/softnic/version.map | 2 +-
lib/fib/version.map | 21 ++++++++++-----------
lib/rib/version.map | 33 ++++++++++++++++-----------------
3 files changed, 27 insertions(+), 29 deletions(-)
diff --git a/drivers/net/softnic/version.map b/drivers/net/softnic/version.map
index cd5afcf155..01e1514276 100644
--- a/drivers/net/softnic/version.map
+++ b/drivers/net/softnic/version.map
@@ -1,8 +1,8 @@
DPDK_22 {
global:
- rte_pmd_softnic_run;
rte_pmd_softnic_manage;
+ rte_pmd_softnic_run;
local: *;
};
diff --git a/lib/fib/version.map b/lib/fib/version.map
index af76add2b9..b23fa42b9b 100644
--- a/lib/fib/version.map
+++ b/lib/fib/version.map
@@ -1,25 +1,24 @@
DPDK_22 {
global:
- rte_fib_add;
- rte_fib_create;
- rte_fib_delete;
- rte_fib_find_existing;
- rte_fib_free;
- rte_fib_lookup_bulk;
- rte_fib_get_dp;
- rte_fib_get_rib;
- rte_fib_select_lookup;
-
rte_fib6_add;
rte_fib6_create;
rte_fib6_delete;
rte_fib6_find_existing;
rte_fib6_free;
- rte_fib6_lookup_bulk;
rte_fib6_get_dp;
rte_fib6_get_rib;
+ rte_fib6_lookup_bulk;
rte_fib6_select_lookup;
+ rte_fib_add;
+ rte_fib_create;
+ rte_fib_delete;
+ rte_fib_find_existing;
+ rte_fib_free;
+ rte_fib_get_dp;
+ rte_fib_get_rib;
+ rte_fib_lookup_bulk;
+ rte_fib_select_lookup;
local: *;
};
diff --git a/lib/rib/version.map b/lib/rib/version.map
index 6eb1252acb..f356fe8849 100644
--- a/lib/rib/version.map
+++ b/lib/rib/version.map
@@ -1,21 +1,6 @@
DPDK_22 {
global:
- rte_rib_create;
- rte_rib_find_existing;
- rte_rib_free;
- rte_rib_get_depth;
- rte_rib_get_ext;
- rte_rib_get_ip;
- rte_rib_get_nh;
- rte_rib_get_nxt;
- rte_rib_insert;
- rte_rib_lookup;
- rte_rib_lookup_parent;
- rte_rib_lookup_exact;
- rte_rib_set_nh;
- rte_rib_remove;
-
rte_rib6_create;
rte_rib6_find_existing;
rte_rib6_free;
@@ -26,10 +11,24 @@ DPDK_22 {
rte_rib6_get_nxt;
rte_rib6_insert;
rte_rib6_lookup;
- rte_rib6_lookup_parent;
rte_rib6_lookup_exact;
- rte_rib6_set_nh;
+ rte_rib6_lookup_parent;
rte_rib6_remove;
+ rte_rib6_set_nh;
+ rte_rib_create;
+ rte_rib_find_existing;
+ rte_rib_free;
+ rte_rib_get_depth;
+ rte_rib_get_ext;
+ rte_rib_get_ip;
+ rte_rib_get_nh;
+ rte_rib_get_nxt;
+ rte_rib_insert;
+ rte_rib_lookup;
+ rte_rib_lookup_exact;
+ rte_rib_lookup_parent;
+ rte_rib_remove;
+ rte_rib_set_nh;
local: *;
};
--
2.23.0
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-05 0:55 3% ` Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 0:55 UTC (permalink / raw)
To: dev
Cc: David Marchand, Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson,
Olivier Matz, Ray Kinsella
Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:
* rdline_create(): allocate and initialize struct rdline.
This function replaces rdline_init() and takes an extra parameter:
opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.
Remove the rdline_init() function from the library headers and export list,
because using it requires knowledge of sizeof(struct rdline).
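For illustration, a minimal caller of the new allocation API could look
like the sketch below. The callback signatures are assumed to match the
existing typedefs in cmdline_rdline.h, and the callback bodies are
placeholders rather than part of this patch:

	#include <unistd.h>
	#include <cmdline_rdline.h>

	/* placeholder callbacks; a real user would echo to its terminal
	 * and parse the validated line
	 */
	static int
	my_write_char(struct rdline *rdl, char c)
	{
		(void)rdl;
		return write(STDOUT_FILENO, &c, 1) == 1 ? 1 : -1;
	}

	static void
	my_validate(struct rdline *rdl, const char *buf, unsigned int size)
	{
		void *ctx = rdline_get_opaque(rdl); /* user data from creation */

		(void)ctx;
		(void)buf;
		(void)size;
	}

	static int
	my_complete(struct rdline *rdl, const char *buf, char *dstbuf,
		    unsigned int dstsize, int *state)
	{
		(void)rdl; (void)buf; (void)dstbuf; (void)dstsize; (void)state;
		return 0; /* declines to complete in this sketch */
	}

	static struct rdline *
	open_line_editor(void *ctx)
	{
		struct rdline *rdl = NULL;

		/* replaces rdline_init() on a caller-owned struct */
		if (rdline_create(&rdl, my_write_char, my_validate, my_complete,
				  ctx) < 0)
			return NULL;
		rdline_newline(rdl, "example> ");
		return rdl; /* released later with rdline_free(rdl) */
	}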
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 19 ++++--
doc/guides/rel_notes/release_21_11.rst | 3 +
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline_cirbuf.c | 1 -
lib/cmdline/cmdline_private.h | 49 ++++++++++++++
lib/cmdline/cmdline_rdline.c | 50 ++++++++++++++-
lib/cmdline/cmdline_rdline.h | 88 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
9 files changed, 154 insertions(+), 69 deletions(-)
diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
struct rdline *rdl = cmdline_get_rdline(cl);
cmdline_printf(cl, "History buffer size: %zu\n",
- sizeof(rdl->history_buf));
+ rdline_get_history_buffer_size(rdl));
}
cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..367dcad5be 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,18 @@ test_cmdline_parse_fns(void)
static int
test_cmdline_rdline_fns(void)
{
- struct rdline rdl;
+ struct rdline *rdl = NULL;
rdline_write_char_t *wc = &cmdline_write_char;
rdline_validate_t *v = &valid_buffer;
rdline_complete_t *c = &complete_buffer;
- if (rdline_init(NULL, wc, v, c) >= 0)
+ if (rdline_create(NULL, wc, v, c, NULL) >= 0)
goto error;
- if (rdline_init(&rdl, NULL, v, c) >= 0)
+ if (rdline_create(&rdl, NULL, v, c, NULL) >= 0)
goto error;
- if (rdline_init(&rdl, wc, NULL, c) >= 0)
+ if (rdline_create(&rdl, wc, NULL, c, NULL) >= 0)
goto error;
- if (rdline_init(&rdl, wc, v, NULL) >= 0)
+ if (rdline_create(&rdl, wc, v, NULL, NULL) >= 0)
goto error;
if (rdline_char_in(NULL, 0) >= 0)
goto error;
@@ -102,25 +102,30 @@ test_cmdline_rdline_fns(void)
goto error;
if (rdline_add_history(NULL, "history") >= 0)
goto error;
- if (rdline_add_history(&rdl, NULL) >= 0)
+ if (rdline_add_history(rdl, NULL) >= 0)
goto error;
if (rdline_get_history_item(NULL, 0) != NULL)
goto error;
/* void functions */
+ rdline_get_history_buffer_size(NULL);
+ rdline_get_opaque(NULL);
rdline_newline(NULL, "prompt");
- rdline_newline(&rdl, NULL);
+ rdline_newline(rdl, NULL);
rdline_stop(NULL);
rdline_quit(NULL);
rdline_restart(NULL);
rdline_redisplay(NULL);
rdline_reset(NULL);
rdline_clear_history(NULL);
+ rdline_free(NULL);
+ rdline_free(rdl);
return 0;
error:
printf("Error: function accepted null parameter!\n");
+ rdline_free(rdl);
return -1;
}
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+ to dynamically allocate and free it, and to access user data in callbacks.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
cl->ctx = ctx;
ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
- cmdline_complete_buffer);
+ cmdline_complete_buffer, cl);
if (ret != 0) {
free(cl);
return NULL;
}
- cl->rdl.opaque = cl;
cmdline_set_prompt(cl, prompt);
rdline_newline(&cl->rdl, cl->prompt);
diff --git a/lib/cmdline/cmdline_cirbuf.c b/lib/cmdline/cmdline_cirbuf.c
index 829a8af563..cbb76a7016 100644
--- a/lib/cmdline/cmdline_cirbuf.c
+++ b/lib/cmdline/cmdline_cirbuf.c
@@ -10,7 +10,6 @@
#include "cmdline_cirbuf.h"
-
int
cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int maxlen)
{
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
#include <cmdline.h>
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE 32
+#define RDLINE_VT100_BUF_SIZE 8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+ RDLINE_INIT,
+ RDLINE_RUNNING,
+ RDLINE_EXITED
+};
+
+struct rdline {
+ enum rdline_status status;
+ /* rdline bufs */
+ struct cirbuf left;
+ struct cirbuf right;
+ char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+ char right_buf[RDLINE_BUF_SIZE];
+
+ char prompt[RDLINE_PROMPT_SIZE];
+ unsigned int prompt_size;
+
+ char kill_buf[RDLINE_BUF_SIZE];
+ unsigned int kill_size;
+
+ /* history */
+ struct cirbuf history;
+ char history_buf[RDLINE_HISTORY_BUF_SIZE];
+ int history_cur_line;
+
+ /* callbacks and func pointers */
+ rdline_write_char_t *write_char;
+ rdline_validate_t *validate;
+ rdline_complete_t *complete;
+
+ /* vt100 parser */
+ struct cmdline_vt100 vt100;
+
+ /* opaque pointer */
+ void *opaque;
+};
+
#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal {
DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
__rte_format_printf(2, 0)
int cmdline_vdprintf(int fd, const char *format, va_list op);
+int rdline_init(struct rdline *rdl,
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+
#endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..a525513465 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
#include <ctype.h>
#include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
#include "cmdline_rdline.h"
static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
int
rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete)
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
{
if (!rdl || !write_char || !validate || !complete)
return -EINVAL;
@@ -47,10 +49,40 @@ rdline_init(struct rdline *rdl,
rdl->validate = validate;
rdl->complete = complete;
rdl->write_char = write_char;
+ rdl->opaque = opaque;
rdl->status = RDLINE_INIT;
return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
}
+int
+rdline_create(struct rdline **out,
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque)
+{
+ struct rdline *rdl;
+ int ret;
+
+ if (out == NULL)
+ return -EINVAL;
+ rdl = malloc(sizeof(*rdl));
+ if (rdl == NULL)
+ return -ENOMEM;
+ ret = rdline_init(rdl, write_char, validate, complete, opaque);
+ if (ret < 0)
+ free(rdl);
+ else
+ *out = rdl;
+ return ret;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+ free(rdl);
+}
+
void
rdline_newline(struct rdline *rdl, const char *prompt)
{
@@ -564,6 +596,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
return NULL;
}
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+ return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+ return rdl != NULL ? rdl->opaque : NULL;
+}
+
int
rdline_add_history(struct rdline * rdl, const char * buf)
{
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..b93e9ea569 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
/**
* This file is a small equivalent to the GNU readline library, but it
* was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
*
* Obviously, it does not support as many things as the GNU readline,
* but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
*/
#include <stdio.h>
+#include <rte_compat.h>
#include <cmdline_cirbuf.h>
#include <cmdline_vt100.h>
@@ -38,19 +37,6 @@
extern "C" {
#endif
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE 32
-#define RDLINE_VT100_BUF_SIZE 8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
- RDLINE_INIT,
- RDLINE_RUNNING,
- RDLINE_EXITED
-};
-
struct rdline;
typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,34 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
char *dstbuf, unsigned int dstsize,
int *state);
-struct rdline {
- enum rdline_status status;
- /* rdline bufs */
- struct cirbuf left;
- struct cirbuf right;
- char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
- char right_buf[RDLINE_BUF_SIZE];
-
- char prompt[RDLINE_PROMPT_SIZE];
- unsigned int prompt_size;
-
- char kill_buf[RDLINE_BUF_SIZE];
- unsigned int kill_size;
-
- /* history */
- struct cirbuf history;
- char history_buf[RDLINE_HISTORY_BUF_SIZE];
- int history_cur_line;
-
- /* callbacks and func pointers */
- rdline_write_char_t *write_char;
- rdline_validate_t *validate;
- rdline_complete_t *complete;
-
- /* vt100 parser */
- struct cmdline_vt100 vt100;
-
- /* opaque pointer */
- void *opaque;
-};
-
/**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
+ * \param rdl Receives a pointer to the allocated structure.
* \param write_char The function used by the function to write a character
* \param validate A pointer to the function to execute when the
* user validates the buffer.
* \param complete A pointer to the function to execute when the
* user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return 0 on success, negative errno-style code on failure.
*/
-int rdline_init(struct rdline *rdl,
- rdline_write_char_t *write_char,
- rdline_validate_t *validate,
- rdline_complete_t *complete);
+__rte_experimental
+int rdline_create(struct rdline **rdl,
+ rdline_write_char_t *write_char,
+ rdline_validate_t *validate,
+ rdline_complete_t *complete,
+ void *opaque);
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ * If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
/**
* Init the current buffer, and display a prompt.
@@ -194,6 +162,18 @@ void rdline_clear_history(struct rdline *rdl);
*/
char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..13feb4d0a5 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
rdline_clear_history;
rdline_get_buffer;
rdline_get_history_item;
- rdline_init;
rdline_newline;
rdline_quit;
rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 20.11
cmdline_get_rdline;
+ # added in 21.11
+ rdline_create;
+ rdline_free;
+ rdline_get_history_buffer_size;
+ rdline_get_opaque;
+
local: *;
};
--
2.29.3
^ permalink raw reply [relevance 3%]
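The header comments above describe the new dynamic-allocation API. A minimal usage
sketch, assuming the callback signatures declared in cmdline_rdline.h (the my_ctx
structure and the callback bodies below are illustrative, not part of the patch):

#include <stdio.h>
#include <cmdline_rdline.h>

struct my_ctx {
	unsigned int nb_lines;	/* user data reached via rdline_get_opaque() */
};

static int
my_write_char(struct rdline *rdl, char c)
{
	(void)rdl;
	return fputc(c, stdout);
}

static void
my_validate(struct rdline *rdl, const char *buf, unsigned int size)
{
	/* Replaces the old direct access to rdl->opaque. */
	struct my_ctx *ctx = rdline_get_opaque(rdl);

	ctx->nb_lines++;
	printf("line %u (%u bytes): %s", ctx->nb_lines, size, buf);
}

static int
my_complete(struct rdline *rdl, const char *buf, char *dstbuf,
	    unsigned int dstsize, int *state)
{
	(void)rdl; (void)buf; (void)dstbuf; (void)dstsize; (void)state;
	return 0;	/* no completion in this sketch */
}

static int
my_cli(void)
{
	struct my_ctx ctx = { 0 };
	struct rdline *rdl;
	int c;

	/* rdline_init() is now internal; applications allocate dynamically. */
	if (rdline_create(&rdl, my_write_char, my_validate, my_complete, &ctx) < 0)
		return -1;
	rdline_newline(rdl, "example> ");
	while ((c = getchar()) != EOF)
		if (rdline_char_in(rdl, (char)c) < 0)
			break;
	rdline_free(rdl);
	return 0;
}

With this, the application no longer needs sizeof(struct rdline) and reaches its
user data through rdline_get_opaque() instead of touching rdl->opaque directly.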
* [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
@ 2021-10-05 0:55 4% ` Dmitry Kozlyuk
2021-10-05 0:55 3% ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 0:55 UTC (permalink / raw)
To: dev; +Cc: David Marchand, Dmitry Kozlyuk, Olivier Matz, Ray Kinsella
Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/cmdline/cmdline.h | 19 -------------------
lib/cmdline/cmdline_private.h | 8 +++++++-
4 files changed, 9 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
- content. On Linux and FreeBSD, supported prior to DPDK 20.11,
- original structure will be kept until DPDK 21.11.
-
* security: The functions ``rte_security_set_pkt_metadata`` and
``rte_security_get_userdata`` will be made inline functions and additional
flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
#ifndef _CMDLINE_H_
#define _CMDLINE_H_
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
#include <rte_common.h>
#include <rte_compat.h>
@@ -27,23 +23,8 @@
extern "C" {
#endif
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
- int s_in;
- int s_out;
- cmdline_parse_ctx_t *ctx;
- struct rdline rdl;
- char prompt[RDLINE_PROMPT_SIZE];
- struct termios oldterm;
-};
-
-#else
-
struct cmdline;
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
#include <rte_os_shim.h>
#ifdef RTE_EXEC_ENV_WINDOWS
#include <rte_windows.h>
+#else
+#include <termios.h>
#endif
#include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
int is_console_input;
int is_console_output;
};
+#endif
struct cmdline {
int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
cmdline_parse_ctx_t *ctx;
struct rdline rdl;
char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal oldterm;
char repeated_char;
WORD repeat_count;
-};
+#else
+ struct termios oldterm;
#endif
+};
/* Disable buffering and echoing, save previous settings to oldterm. */
void terminal_adjust(struct cmdline *cl);
--
2.29.3
^ permalink raw reply [relevance 4%]
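With the structure hidden, applications keep working through the exported
cmdline_*() functions only. A minimal sketch, assuming an application-defined
parse context main_ctx (illustrative, not part of the patch):

#include <cmdline.h>
#include <cmdline_parse.h>

extern cmdline_parse_ctx_t main_ctx[];	/* application-defined commands */

static void
run_cli(void)
{
	struct cmdline *cl;

	/* Fields such as cl->s_in or cl->rdl are no longer reachable here;
	 * only the exported cmdline_*() functions may be used. */
	cl = cmdline_new(main_ctx, "example> ", 0, 1);	/* stdin, stdout */
	if (cl == NULL)
		return;
	cmdline_interact(cl);
	cmdline_free(cl);
}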
* [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI
@ 2021-10-05 0:55 4% ` Dmitry Kozlyuk
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 0:55 UTC (permalink / raw)
To: dev; +Cc: David Marchand, Dmitry Kozlyuk
Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.
v3: add experimental tags and release notes for rdline.
v2: also hide struct rdline (David, Olivier).
Dmitry Kozlyuk (2):
cmdline: make struct cmdline opaque
cmdline: make struct rdline opaque
app/test-cmdline/commands.c | 2 +-
app/test/test_cmdline_lib.c | 19 ++++--
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_21_11.rst | 5 ++
lib/cmdline/cmdline.c | 3 +-
lib/cmdline/cmdline.h | 19 ------
lib/cmdline/cmdline_cirbuf.c | 1 -
lib/cmdline/cmdline_private.h | 57 ++++++++++++++++-
lib/cmdline/cmdline_rdline.c | 50 ++++++++++++++-
lib/cmdline/cmdline_rdline.h | 88 ++++++++++----------------
lib/cmdline/version.map | 8 ++-
11 files changed, 163 insertions(+), 93 deletions(-)
--
2.29.3
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 3/3] cryptodev: rework session framework
@ 2021-10-04 19:07 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-04 19:07 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
g.singh, jianjay.zhou, asomalap, ruifeng.wang, Ananyev,
Konstantin, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
Ankur Dwivedi, Power, Ciara
Hi Fan,
> Hi Akhil,
>
> Your patch failed all QAT tests - it may also fail on all PMDs that need to
> know the session's physical address (such as nitrox, caam).
>
> The reason is that the QAT PMD cannot know the physical address of the
> session private data passed into it - the private data is computed by
> rte_cryptodev_sym_session_init() as an offset from the start of the session
> (which has a mempool obj header with a valid physical address) for that driver
> type. The session private data pointer - although it is a valid buffer - does not
> have the obj header that contains the physical address.
>
> I think the solution can be: instead of passing the session private data, the
> rte_cryptodev_sym_session pointer should be passed, and a macro should be
> provided so that the drivers can compute the offset themselves and find the
> correct session private data within the session.
Thanks for trying this patchset.
Instead of passing the rte_cryptodev_sym_session, can we add another
argument to sym_session_configure() to pass the physical address of the session?
This will save the PMD the overhead of getting the offset from the lib and then calling rte_mempool_virt2iova().
sym_session_configure(dev, xforms, sess_priv, sess_priv_iova)
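Roughly, the PMD op would then be shaped like below (just a sketch, names and
types are not final):

/* Sketch only: proposed shape, not an existing API. */
typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
		struct rte_crypto_sym_xform *xform,
		void *sess_priv, rte_iova_t sess_priv_iova);

so the PMD can use sess_priv_iova directly instead of computing it.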
The motive for the above change: since the mempool is not exposed to the PMD,
it does not make sense to call rte_mempool_virt2iova() inside the PMD.
What do you suggest?
Regards,
Akhil
>
> Regards,
> Fan
>
> > -----Original Message-----
> > From: Akhil Goyal <gakhil@marvell.com>
> > Sent: Thursday, September 30, 2021 3:50 PM
> > To: dev@dpdk.org
> > Cc: thomas@monjalon.net; david.marchand@redhat.com;
> > hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> > Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> > g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> > jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> > <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> > rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> > <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> > Subject: [PATCH 3/3] cryptodev: rework session framework
> >
> > As per current design, rte_cryptodev_sym_session_create() and
> > rte_cryptodev_sym_session_init() use separate mempool objects
> > for a single session.
> > And structure rte_cryptodev_sym_session is not directly used
> > by the application, it may cause ABI breakage if the structure
> > is modified in future.
> >
> > To address these two issues, the rte_cryptodev_sym_session_create
> > will take one mempool object for both the session and session
> > private data. The API rte_cryptodev_sym_session_init will now not
> > take mempool object.
> > rte_cryptodev_sym_session_create will now return an opaque session
> > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > and other APIs.
> >
> > With this change, rte_cryptodev_sym_session_init will send
> > pointer to session private data of corresponding driver to the PMD
> > based on the driver_id for filling the PMD data.
> >
> > In data path, opaque session pointer is attached to rte_crypto_op
> > and the PMD can call an internal library API to get the session
> > private data pointer based on the driver id.
> >
> > TODO:
> > - inline APIs for opaque data
> > - move rte_cryptodev_sym_session struct to cryptodev_pmd.h
> > - currently nb_drivers are getting updated in RTE_INIT which
> > result in increasing the memory requirements for session.
> > This will be moved to PMD probe so that memory is created
> > only for those PMDs which are probed and not just compiled in.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > app/test-crypto-perf/cperf.h | 1 -
> > app/test-crypto-perf/cperf_ops.c | 33 ++---
> > app/test-crypto-perf/cperf_ops.h | 6 +-
> > app/test-crypto-perf/cperf_test_latency.c | 5 +-
> > app/test-crypto-perf/cperf_test_latency.h | 1 -
> > .../cperf_test_pmd_cyclecount.c | 5 +-
> > .../cperf_test_pmd_cyclecount.h | 1 -
> > app/test-crypto-perf/cperf_test_throughput.c | 5 +-
> > app/test-crypto-perf/cperf_test_throughput.h | 1 -
> > app/test-crypto-perf/cperf_test_verify.c | 5 +-
> > app/test-crypto-perf/cperf_test_verify.h | 1 -
> > app/test-crypto-perf/main.c | 29 +---
> > app/test/test_cryptodev.c | 130 +++++-------------
> > app/test/test_cryptodev.h | 1 -
> > app/test/test_cryptodev_asym.c | 1 -
> > app/test/test_cryptodev_blockcipher.c | 6 +-
> > app/test/test_event_crypto_adapter.c | 28 +---
> > app/test/test_ipsec.c | 22 +--
> > drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 33 +----
> > .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 34 +----
> > drivers/crypto/armv8/rte_armv8_pmd_ops.c | 34 +----
> > drivers/crypto/bcmfs/bcmfs_sym_session.c | 36 +----
> > drivers/crypto/bcmfs/bcmfs_sym_session.h | 6 +-
> > drivers/crypto/caam_jr/caam_jr.c | 32 ++---
> > drivers/crypto/ccp/ccp_pmd_ops.c | 32 +----
> > drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 18 ++-
> > drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 18 +--
> > drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 61 +++-----
> > drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 13 +-
> > drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 29 +---
> > drivers/crypto/dpaa_sec/dpaa_sec.c | 31 +----
> > drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 34 +----
> > drivers/crypto/mlx5/mlx5_crypto.c | 24 +---
> > drivers/crypto/mvsam/rte_mrvl_pmd_ops.c | 36 ++---
> > drivers/crypto/nitrox/nitrox_sym.c | 31 +----
> > drivers/crypto/null/null_crypto_pmd_ops.c | 34 +----
> > .../crypto/octeontx/otx_cryptodev_hw_access.h | 1 -
> > drivers/crypto/octeontx/otx_cryptodev_ops.c | 60 +++-----
> > drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 52 +++----
> > .../octeontx2/otx2_cryptodev_ops_helper.h | 16 +--
> > drivers/crypto/openssl/rte_openssl_pmd_ops.c | 35 +----
> > drivers/crypto/qat/qat_sym_session.c | 29 +---
> > drivers/crypto/qat/qat_sym_session.h | 6 +-
> > drivers/crypto/scheduler/scheduler_pmd_ops.c | 9 +-
> > drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 34 +----
> > drivers/crypto/virtio/virtio_cryptodev.c | 31 ++---
> > drivers/crypto/zuc/rte_zuc_pmd_ops.c | 35 +----
> > .../octeontx2/otx2_evdev_crypto_adptr_rx.h | 3 +-
> > examples/fips_validation/fips_dev_self_test.c | 32 ++---
> > examples/fips_validation/main.c | 20 +--
> > examples/ipsec-secgw/ipsec-secgw.c | 72 +++++-----
> > examples/ipsec-secgw/ipsec.c | 3 +-
> > examples/ipsec-secgw/ipsec.h | 1 -
> > examples/ipsec-secgw/ipsec_worker.c | 4 -
> > examples/l2fwd-crypto/main.c | 41 +-----
> > examples/vhost_crypto/main.c | 16 +--
> > lib/cryptodev/cryptodev_pmd.h | 7 +-
> > lib/cryptodev/rte_crypto.h | 2 +-
> > lib/cryptodev/rte_crypto_sym.h | 2 +-
> > lib/cryptodev/rte_cryptodev.c | 73 ++++++----
> > lib/cryptodev/rte_cryptodev.h | 23 ++--
> > lib/cryptodev/rte_cryptodev_trace.h | 5 +-
> > lib/pipeline/rte_table_action.c | 8 +-
> > lib/pipeline/rte_table_action.h | 2 +-
> > lib/vhost/rte_vhost_crypto.h | 3 -
> > lib/vhost/vhost_crypto.c | 7 +-
> > 66 files changed, 399 insertions(+), 1050 deletions(-)
> >
> > diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
> > index 2b0aad095c..db58228dce 100644
> > --- a/app/test-crypto-perf/cperf.h
> > +++ b/app/test-crypto-perf/cperf.h
> > @@ -15,7 +15,6 @@ struct cperf_op_fns;
> >
> > typedef void *(*cperf_constructor_t)(
> > struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id,
> > uint16_t qp_id,
> > const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-
> > perf/cperf_ops.c
> > index 1b3cbe77b9..f094bc656d 100644
> > --- a/app/test-crypto-perf/cperf_ops.c
> > +++ b/app/test-crypto-perf/cperf_ops.c
> > @@ -12,7 +12,7 @@ static int
> > cperf_set_ops_asym(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset __rte_unused,
> > uint32_t dst_buf_offset __rte_unused, uint16_t nb_ops,
> > - struct rte_cryptodev_sym_session *sess,
> > + void *sess,
> > const struct cperf_options *options __rte_unused,
> > const struct cperf_test_vector *test_vector __rte_unused,
> > uint16_t iv_offset __rte_unused,
> > @@ -40,7 +40,7 @@ static int
> > cperf_set_ops_security(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset __rte_unused,
> > uint32_t dst_buf_offset __rte_unused,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options __rte_unused,
> > const struct cperf_test_vector *test_vector __rte_unused,
> > uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -106,7 +106,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
> > static int
> > cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector __rte_unused,
> > uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -145,7 +145,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op
> > **ops,
> > static int
> > cperf_set_ops_null_auth(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector __rte_unused,
> > uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -184,7 +184,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op
> **ops,
> > static int
> > cperf_set_ops_cipher(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -240,7 +240,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
> > static int
> > cperf_set_ops_auth(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -340,7 +340,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
> > static int
> > cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -455,7 +455,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op
> > **ops,
> > static int
> > cperf_set_ops_aead(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -563,9 +563,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
> > return 0;
> > }
> >
> > -static struct rte_cryptodev_sym_session *
> > +static void *
> > cperf_create_session(struct rte_mempool *sess_mp,
> > - struct rte_mempool *priv_mp,
> > uint8_t dev_id,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > @@ -590,7 +589,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> > if (sess == NULL)
> > return NULL;
> > rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
> > - &xform, priv_mp);
> > + &xform, sess_mp);
> > if (rc < 0) {
> > if (sess != NULL) {
> > rte_cryptodev_asym_session_clear(dev_id,
> > @@ -742,8 +741,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> > cipher_xform.cipher.iv.length = 0;
> > }
> > /* create crypto session */
> > - rte_cryptodev_sym_session_init(dev_id, sess,
> > &cipher_xform,
> > - priv_mp);
> > + rte_cryptodev_sym_session_init(dev_id, sess,
> > &cipher_xform);
> > /*
> > * auth only
> > */
> > @@ -770,8 +768,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> > auth_xform.auth.iv.length = 0;
> > }
> > /* create crypto session */
> > - rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
> > - priv_mp);
> > + rte_cryptodev_sym_session_init(dev_id, sess,
> > &auth_xform);
> > /*
> > * cipher and auth
> > */
> > @@ -830,12 +827,12 @@ cperf_create_session(struct rte_mempool
> > *sess_mp,
> > cipher_xform.next = &auth_xform;
> > /* create crypto session */
> > rte_cryptodev_sym_session_init(dev_id,
> > - sess, &cipher_xform, priv_mp);
> > + sess, &cipher_xform);
> > } else { /* auth then cipher */
> > auth_xform.next = &cipher_xform;
> > /* create crypto session */
> > rte_cryptodev_sym_session_init(dev_id,
> > - sess, &auth_xform, priv_mp);
> > + sess, &auth_xform);
> > }
> > } else { /* options->op_type == CPERF_AEAD */
> > aead_xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> > @@ -856,7 +853,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> >
> > /* Create crypto session */
> > rte_cryptodev_sym_session_init(dev_id,
> > - sess, &aead_xform, priv_mp);
> > + sess, &aead_xform);
> > }
> >
> > return sess;
> > diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-
> > perf/cperf_ops.h
> > index ff125d12cd..3ff10491a0 100644
> > --- a/app/test-crypto-perf/cperf_ops.h
> > +++ b/app/test-crypto-perf/cperf_ops.h
> > @@ -12,15 +12,15 @@
> > #include "cperf_test_vectors.h"
> >
> >
> > -typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
> > - struct rte_mempool *sess_mp, struct rte_mempool
> > *sess_priv_mp,
> > +typedef void *(*cperf_sessions_create_t)(
> > + struct rte_mempool *sess_mp,
> > uint8_t dev_id, const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset);
> >
> > typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
> > uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > - uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > + uint16_t nb_ops, void *sess,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > uint16_t iv_offset, uint32_t *imix_idx);
> > diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-
> > perf/cperf_test_latency.c
> > index 159fe8492b..4193f7e777 100644
> > --- a/app/test-crypto-perf/cperf_test_latency.c
> > +++ b/app/test-crypto-perf/cperf_test_latency.c
> > @@ -24,7 +24,7 @@ struct cperf_latency_ctx {
> >
> > struct rte_mempool *pool;
> >
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> >
> > cperf_populate_ops_t populate_ops;
> >
> > @@ -59,7 +59,6 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
> >
> > void *
> > cperf_latency_test_constructor(struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id, uint16_t qp_id,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > @@ -84,7 +83,7 @@ cperf_latency_test_constructor(struct rte_mempool
> > *sess_mp,
> > sizeof(struct rte_crypto_sym_op) +
> > sizeof(struct cperf_op_result *);
> >
> > - ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id,
> > options,
> > + ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> > test_vector, iv_offset);
> > if (ctx->sess == NULL)
> > goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-
> > perf/cperf_test_latency.h
> > index ed5b0a07bb..d3fc3218d7 100644
> > --- a/app/test-crypto-perf/cperf_test_latency.h
> > +++ b/app/test-crypto-perf/cperf_test_latency.h
> > @@ -17,7 +17,6 @@
> > void *
> > cperf_latency_test_constructor(
> > struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id,
> > uint16_t qp_id,
> > const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-
> > crypto-perf/cperf_test_pmd_cyclecount.c
> > index cbbbedd9ba..3dd489376f 100644
> > --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> > +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> > @@ -27,7 +27,7 @@ struct cperf_pmd_cyclecount_ctx {
> > struct rte_crypto_op **ops;
> > struct rte_crypto_op **ops_processed;
> >
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> >
> > cperf_populate_ops_t populate_ops;
> >
> > @@ -93,7 +93,6 @@ cperf_pmd_cyclecount_test_free(struct
> > cperf_pmd_cyclecount_ctx *ctx)
> >
> > void *
> > cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id, uint16_t qp_id,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > @@ -120,7 +119,7 @@ cperf_pmd_cyclecount_test_constructor(struct
> > rte_mempool *sess_mp,
> > uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> > sizeof(struct rte_crypto_sym_op);
> >
> > - ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id,
> > options,
> > + ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> > test_vector, iv_offset);
> > if (ctx->sess == NULL)
> > goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h b/app/test-
> > crypto-perf/cperf_test_pmd_cyclecount.h
> > index 3084038a18..beb4419910 100644
> > --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> > +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> > @@ -18,7 +18,6 @@
> > void *
> > cperf_pmd_cyclecount_test_constructor(
> > struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id,
> > uint16_t qp_id,
> > const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-
> crypto-
> > perf/cperf_test_throughput.c
> > index 76fcda47ff..dc5c48b4da 100644
> > --- a/app/test-crypto-perf/cperf_test_throughput.c
> > +++ b/app/test-crypto-perf/cperf_test_throughput.c
> > @@ -18,7 +18,7 @@ struct cperf_throughput_ctx {
> >
> > struct rte_mempool *pool;
> >
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> >
> > cperf_populate_ops_t populate_ops;
> >
> > @@ -64,7 +64,6 @@ cperf_throughput_test_free(struct
> > cperf_throughput_ctx *ctx)
> >
> > void *
> > cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id, uint16_t qp_id,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > @@ -87,7 +86,7 @@ cperf_throughput_test_constructor(struct
> > rte_mempool *sess_mp,
> > uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> > sizeof(struct rte_crypto_sym_op);
> >
> > - ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id,
> > options,
> > + ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> > test_vector, iv_offset);
> > if (ctx->sess == NULL)
> > goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-
> > crypto-perf/cperf_test_throughput.h
> > index 91e1a4b708..439ec8e559 100644
> > --- a/app/test-crypto-perf/cperf_test_throughput.h
> > +++ b/app/test-crypto-perf/cperf_test_throughput.h
> > @@ -18,7 +18,6 @@
> > void *
> > cperf_throughput_test_constructor(
> > struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id,
> > uint16_t qp_id,
> > const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-
> > perf/cperf_test_verify.c
> > index 2939aeaa93..cf561dd700 100644
> > --- a/app/test-crypto-perf/cperf_test_verify.c
> > +++ b/app/test-crypto-perf/cperf_test_verify.c
> > @@ -18,7 +18,7 @@ struct cperf_verify_ctx {
> >
> > struct rte_mempool *pool;
> >
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> >
> > cperf_populate_ops_t populate_ops;
> >
> > @@ -51,7 +51,6 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
> >
> > void *
> > cperf_verify_test_constructor(struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id, uint16_t qp_id,
> > const struct cperf_options *options,
> > const struct cperf_test_vector *test_vector,
> > @@ -74,7 +73,7 @@ cperf_verify_test_constructor(struct rte_mempool
> > *sess_mp,
> > uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> > sizeof(struct rte_crypto_sym_op);
> >
> > - ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id,
> > options,
> > + ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> > test_vector, iv_offset);
> > if (ctx->sess == NULL)
> > goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-
> > perf/cperf_test_verify.h
> > index ac2192ba99..9f70ad87ba 100644
> > --- a/app/test-crypto-perf/cperf_test_verify.h
> > +++ b/app/test-crypto-perf/cperf_test_verify.h
> > @@ -18,7 +18,6 @@
> > void *
> > cperf_verify_test_constructor(
> > struct rte_mempool *sess_mp,
> > - struct rte_mempool *sess_priv_mp,
> > uint8_t dev_id,
> > uint16_t qp_id,
> > const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> > index 390380898e..c3327b7e55 100644
> > --- a/app/test-crypto-perf/main.c
> > +++ b/app/test-crypto-perf/main.c
> > @@ -118,35 +118,14 @@ fill_session_pool_socket(int32_t socket_id,
> > uint32_t session_priv_size,
> > char mp_name[RTE_MEMPOOL_NAMESIZE];
> > struct rte_mempool *sess_mp;
> >
> > - if (session_pool_socket[socket_id].priv_mp == NULL) {
> > - snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > - "priv_sess_mp_%u", socket_id);
> > -
> > - sess_mp = rte_mempool_create(mp_name,
> > - nb_sessions,
> > - session_priv_size,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, socket_id,
> > - 0);
> > -
> > - if (sess_mp == NULL) {
> > - printf("Cannot create pool \"%s\" on socket %d\n",
> > - mp_name, socket_id);
> > - return -ENOMEM;
> > - }
> > -
> > - printf("Allocated pool \"%s\" on socket %d\n",
> > - mp_name, socket_id);
> > - session_pool_socket[socket_id].priv_mp = sess_mp;
> > - }
> > -
> > if (session_pool_socket[socket_id].sess_mp == NULL) {
> >
> > snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > "sess_mp_%u", socket_id);
> >
> > sess_mp =
> > rte_cryptodev_sym_session_pool_create(mp_name,
> > - nb_sessions, 0, 0, 0, socket_id);
> > + nb_sessions, session_priv_size,
> > + 0, 0, socket_id);
> >
> > if (sess_mp == NULL) {
> > printf("Cannot create pool \"%s\" on socket %d\n",
> > @@ -344,12 +323,9 @@ cperf_initialize_cryptodev(struct cperf_options
> > *opts, uint8_t *enabled_cdevs)
> > return ret;
> >
> > qp_conf.mp_session =
> > session_pool_socket[socket_id].sess_mp;
> > - qp_conf.mp_session_private =
> > - session_pool_socket[socket_id].priv_mp;
> >
> > if (opts->op_type == CPERF_ASYM_MODEX) {
> > qp_conf.mp_session = NULL;
> > - qp_conf.mp_session_private = NULL;
> > }
> >
> > ret = rte_cryptodev_configure(cdev_id, &conf);
> > @@ -704,7 +680,6 @@ main(int argc, char **argv)
> >
> > ctx[i] = cperf_testmap[opts.test].constructor(
> > session_pool_socket[socket_id].sess_mp,
> > - session_pool_socket[socket_id].priv_mp,
> > cdev_id, qp_id,
> > &opts, t_vec, &op_fns);
> > if (ctx[i] == NULL) {
> > diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> > index 82f819211a..e5c7930c63 100644
> > --- a/app/test/test_cryptodev.c
> > +++ b/app/test/test_cryptodev.c
> > @@ -79,7 +79,7 @@ struct crypto_unittest_params {
> > #endif
> >
> > union {
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> > #ifdef RTE_LIB_SECURITY
> > void *sec_session;
> > #endif
> > @@ -119,7 +119,7 @@
> > test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> > uint8_t *hmac_key);
> >
> > static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct
> > rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> > struct crypto_unittest_params *ut_params,
> > struct crypto_testsuite_params *ts_param,
> > const uint8_t *cipher,
> > @@ -596,23 +596,11 @@ testsuite_setup(void)
> > }
> >
> > ts_params->session_mpool =
> > rte_cryptodev_sym_session_pool_create(
> > - "test_sess_mp", MAX_NB_SESSIONS, 0, 0, 0,
> > + "test_sess_mp", MAX_NB_SESSIONS, session_size, 0,
> > 0,
> > SOCKET_ID_ANY);
> > TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
> > "session mempool allocation failed");
> >
> > - ts_params->session_priv_mpool = rte_mempool_create(
> > - "test_sess_mp_priv",
> > - MAX_NB_SESSIONS,
> > - session_size,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, SOCKET_ID_ANY,
> > - 0);
> > - TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
> > - "session mempool allocation failed");
> > -
> > -
> > -
> > TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
> > &ts_params->conf),
> > "Failed to configure cryptodev %u with %u qps",
> > @@ -620,7 +608,6 @@ testsuite_setup(void)
> >
> > ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> > ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > - ts_params->qp_conf.mp_session_private = ts_params-
> > >session_priv_mpool;
> >
> > for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > @@ -650,11 +637,6 @@ testsuite_teardown(void)
> > }
> >
> > /* Free session mempools */
> > - if (ts_params->session_priv_mpool != NULL) {
> > - rte_mempool_free(ts_params->session_priv_mpool);
> > - ts_params->session_priv_mpool = NULL;
> > - }
> > -
> > if (ts_params->session_mpool != NULL) {
> > rte_mempool_free(ts_params->session_mpool);
> > ts_params->session_mpool = NULL;
> > @@ -1330,7 +1312,6 @@ dev_configure_and_start(uint64_t ff_disable)
> > ts_params->conf.ff_disable = ff_disable;
> > ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> > ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > - ts_params->qp_conf.mp_session_private = ts_params-
> > >session_priv_mpool;
> >
> > TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params-
> > >valid_devs[0],
> > &ts_params->conf),
> > @@ -1552,7 +1533,6 @@ test_queue_pair_descriptor_setup(void)
> > */
> > qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
> > qp_conf.mp_session = ts_params->session_mpool;
> > - qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> > for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > @@ -2146,8 +2126,7 @@
> test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
> >
> > /* Create crypto session*/
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - ut_params->sess, &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + ut_params->sess, &ut_params->cipher_xform);
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > /* Generate crypto op data structure */
> > @@ -2247,7 +2226,7 @@
> > test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> > uint8_t *hmac_key);
> >
> > static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct
> > rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> > struct crypto_unittest_params *ut_params,
> > struct crypto_testsuite_params *ts_params,
> > const uint8_t *cipher,
> > @@ -2288,7 +2267,7 @@
> > test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> >
> >
> > static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct
> > rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> > struct crypto_unittest_params *ut_params,
> > struct crypto_testsuite_params *ts_params,
> > const uint8_t *cipher,
> > @@ -2401,8 +2380,7 @@ create_wireless_algo_hash_session(uint8_t
> dev_id,
> > ts_params->session_mpool);
> >
> > status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> > if (status == -ENOTSUP)
> > return TEST_SKIPPED;
> >
> > @@ -2443,8 +2421,7 @@ create_wireless_algo_cipher_session(uint8_t
> > dev_id,
> > ts_params->session_mpool);
> >
> > status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->cipher_xform);
> > if (status == -ENOTSUP)
> > return TEST_SKIPPED;
> >
> > @@ -2566,8 +2543,7 @@
> create_wireless_algo_cipher_auth_session(uint8_t
> > dev_id,
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->cipher_xform);
> > if (status == -ENOTSUP)
> > return TEST_SKIPPED;
> >
> > @@ -2629,8 +2605,7 @@ create_wireless_cipher_auth_session(uint8_t
> > dev_id,
> > ts_params->session_mpool);
> >
> > status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->cipher_xform);
> > if (status == -ENOTSUP)
> > return TEST_SKIPPED;
> >
> > @@ -2699,13 +2674,11 @@
> > create_wireless_algo_auth_cipher_session(uint8_t dev_id,
> > ut_params->auth_xform.next = NULL;
> > ut_params->cipher_xform.next = &ut_params->auth_xform;
> > status = rte_cryptodev_sym_session_init(dev_id,
> > ut_params->sess,
> > - &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->cipher_xform);
> >
> > } else
> > status = rte_cryptodev_sym_session_init(dev_id,
> > ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> >
> > if (status == -ENOTSUP)
> > return TEST_SKIPPED;
> > @@ -7838,8 +7811,7 @@ create_aead_session(uint8_t dev_id, enum
> > rte_crypto_aead_algorithm algo,
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->aead_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->aead_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -10992,8 +10964,7 @@ static int MD5_HMAC_create_session(struct
> > crypto_testsuite_params *ts_params,
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - ut_params->sess, &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + ut_params->sess, &ut_params->auth_xform);
> >
> > if (ut_params->sess == NULL)
> > return TEST_FAILED;
> > @@ -11206,7 +11177,7 @@ test_multi_session(void)
> > struct crypto_unittest_params *ut_params = &unittest_params;
> >
> > struct rte_cryptodev_info dev_info;
> > - struct rte_cryptodev_sym_session **sessions;
> > + void **sessions;
> >
> > uint16_t i;
> >
> > @@ -11229,9 +11200,7 @@ test_multi_session(void)
> >
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> >
> > - sessions = rte_malloc(NULL,
> > - sizeof(struct rte_cryptodev_sym_session *) *
> > - (MAX_NB_SESSIONS + 1), 0);
> > + sessions = rte_malloc(NULL, sizeof(void *) * (MAX_NB_SESSIONS +
> > 1), 0);
> >
> > /* Create multiple crypto sessions*/
> > for (i = 0; i < MAX_NB_SESSIONS; i++) {
> > @@ -11240,8 +11209,7 @@ test_multi_session(void)
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - sessions[i], &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + sessions[i], &ut_params->auth_xform);
> > TEST_ASSERT_NOT_NULL(sessions[i],
> > "Session creation failed at session
> > number %u",
> > i);
> > @@ -11279,8 +11247,7 @@ test_multi_session(void)
> > sessions[i] = NULL;
> > /* Next session create should fail */
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - sessions[i], &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + sessions[i], &ut_params->auth_xform);
> > TEST_ASSERT_NULL(sessions[i],
> > "Session creation succeeded unexpectedly!");
> >
> > @@ -11311,7 +11278,7 @@ test_multi_session_random_usage(void)
> > {
> > struct crypto_testsuite_params *ts_params = &testsuite_params;
> > struct rte_cryptodev_info dev_info;
> > - struct rte_cryptodev_sym_session **sessions;
> > + void **sessions;
> > uint32_t i, j;
> > struct multi_session_params ut_paramz[] = {
> >
> > @@ -11355,8 +11322,7 @@ test_multi_session_random_usage(void)
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> >
> > sessions = rte_malloc(NULL,
> > - (sizeof(struct rte_cryptodev_sym_session *)
> > - * MAX_NB_SESSIONS) + 1, 0);
> > + (sizeof(void *) * MAX_NB_SESSIONS) + 1, 0);
> >
> > for (i = 0; i < MB_SESSION_NUMBER; i++) {
> > sessions[i] = rte_cryptodev_sym_session_create(
> > @@ -11373,8 +11339,7 @@ test_multi_session_random_usage(void)
> > rte_cryptodev_sym_session_init(
> > ts_params->valid_devs[0],
> > sessions[i],
> > - &ut_paramz[i].ut_params.auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_paramz[i].ut_params.auth_xform);
> >
> > TEST_ASSERT_NOT_NULL(sessions[i],
> > "Session creation failed at session
> > number %u",
> > @@ -11457,8 +11422,7 @@ test_null_invalid_operation(void)
> >
> > /* Create Crypto session*/
> > ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - ut_params->sess, &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + ut_params->sess, &ut_params->cipher_xform);
> > TEST_ASSERT(ret < 0,
> > "Session creation succeeded unexpectedly");
> >
> > @@ -11475,8 +11439,7 @@ test_null_invalid_operation(void)
> >
> > /* Create Crypto session*/
> > ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - ut_params->sess, &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + ut_params->sess, &ut_params->auth_xform);
> > TEST_ASSERT(ret < 0,
> > "Session creation succeeded unexpectedly");
> >
> > @@ -11521,8 +11484,7 @@ test_null_burst_operation(void)
> >
> > /* Create Crypto session*/
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > - ut_params->sess, &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + ut_params->sess, &ut_params->cipher_xform);
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params-
> > >op_mpool,
> > @@ -11634,7 +11596,6 @@ test_enq_callback_setup(void)
> >
> > qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> > qp_conf.mp_session = ts_params->session_mpool;
> > - qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > ts_params->valid_devs[0], qp_id, &qp_conf,
> > @@ -11734,7 +11695,6 @@ test_deq_callback_setup(void)
> >
> > qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> > qp_conf.mp_session = ts_params->session_mpool;
> > - qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > ts_params->valid_devs[0], qp_id, &qp_conf,
> > @@ -11943,8 +11903,7 @@ static int create_gmac_session(uint8_t dev_id,
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -12588,8 +12547,7 @@ create_auth_session(struct
> > crypto_unittest_params *ut_params,
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -12641,8 +12599,7 @@ create_auth_cipher_session(struct
> > crypto_unittest_params *ut_params,
> > ts_params->session_mpool);
> >
> > rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -13149,8 +13106,7 @@ test_authenticated_encrypt_with_esn(
> >
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > ut_params->sess,
> > - &ut_params->cipher_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->cipher_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -13281,8 +13237,7 @@ test_authenticated_decrypt_with_esn(
> >
> > rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > ut_params->sess,
> > - &ut_params->auth_xform,
> > - ts_params->session_priv_mpool);
> > + &ut_params->auth_xform);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation
> > failed");
> >
> > @@ -14003,11 +13958,6 @@ test_scheduler_attach_worker_op(void)
> > rte_mempool_free(ts_params->session_mpool);
> > ts_params->session_mpool = NULL;
> > }
> > - if (ts_params->session_priv_mpool) {
> > - rte_mempool_free(ts_params-
> > >session_priv_mpool);
> > - ts_params->session_priv_mpool = NULL;
> > - }
> > -
> > if (info.sym.max_nb_sessions != 0 &&
> > info.sym.max_nb_sessions <
> > MAX_NB_SESSIONS) {
> > RTE_LOG(ERR, USER1,
> > @@ -14024,32 +13974,14 @@ test_scheduler_attach_worker_op(void)
> > ts_params->session_mpool =
> > rte_cryptodev_sym_session_pool_create(
> > "test_sess_mp",
> > - MAX_NB_SESSIONS, 0, 0, 0,
> > + MAX_NB_SESSIONS,
> > + session_size, 0, 0,
> > SOCKET_ID_ANY);
> > TEST_ASSERT_NOT_NULL(ts_params-
> > >session_mpool,
> > "session mempool allocation failed");
> > }
> >
> > - /*
> > - * Create mempool with maximum number of sessions,
> > - * to include device specific session private data
> > - */
> > - if (ts_params->session_priv_mpool == NULL) {
> > - ts_params->session_priv_mpool =
> > rte_mempool_create(
> > - "test_sess_mp_priv",
> > - MAX_NB_SESSIONS,
> > - session_size,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, SOCKET_ID_ANY,
> > - 0);
> > -
> > - TEST_ASSERT_NOT_NULL(ts_params-
> > >session_priv_mpool,
> > - "session mempool allocation failed");
> > - }
> > -
> > ts_params->qp_conf.mp_session = ts_params-
> > >session_mpool;
> > - ts_params->qp_conf.mp_session_private =
> > - ts_params->session_priv_mpool;
> >
> > ret = rte_cryptodev_scheduler_worker_attach(sched_id,
> > (uint8_t)i);
> > diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> > index 1cdd84d01f..a3a10d484b 100644
> > --- a/app/test/test_cryptodev.h
> > +++ b/app/test/test_cryptodev.h
> > @@ -89,7 +89,6 @@ struct crypto_testsuite_params {
> > struct rte_mempool *large_mbuf_pool;
> > struct rte_mempool *op_mpool;
> > struct rte_mempool *session_mpool;
> > - struct rte_mempool *session_priv_mpool;
> > struct rte_cryptodev_config conf;
> > struct rte_cryptodev_qp_conf qp_conf;
> >
> > diff --git a/app/test/test_cryptodev_asym.c
> > b/app/test/test_cryptodev_asym.c
> > index 9d19a6d6d9..35da574da8 100644
> > --- a/app/test/test_cryptodev_asym.c
> > +++ b/app/test/test_cryptodev_asym.c
> > @@ -924,7 +924,6 @@ testsuite_setup(void)
> > /* configure qp */
> > ts_params->qp_conf.nb_descriptors =
> > DEFAULT_NUM_OPS_INFLIGHT;
> > ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > - ts_params->qp_conf.mp_session_private = ts_params-
> > >session_mpool;
> > for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > dev_id, qp_id, &ts_params->qp_conf,
> > diff --git a/app/test/test_cryptodev_blockcipher.c
> > b/app/test/test_cryptodev_blockcipher.c
> > index 3cdb2c96e8..9417803f18 100644
> > --- a/app/test/test_cryptodev_blockcipher.c
> > +++ b/app/test/test_cryptodev_blockcipher.c
> > @@ -68,7 +68,6 @@ test_blockcipher_one_case(const struct
> > blockcipher_test_case *t,
> > struct rte_mempool *mbuf_pool,
> > struct rte_mempool *op_mpool,
> > struct rte_mempool *sess_mpool,
> > - struct rte_mempool *sess_priv_mpool,
> > uint8_t dev_id,
> > char *test_msg)
> > {
> > @@ -81,7 +80,7 @@ test_blockcipher_one_case(const struct
> > blockcipher_test_case *t,
> > struct rte_crypto_sym_op *sym_op = NULL;
> > struct rte_crypto_op *op = NULL;
> > struct rte_cryptodev_info dev_info;
> > - struct rte_cryptodev_sym_session *sess = NULL;
> > + void *sess = NULL;
> >
> > int status = TEST_SUCCESS;
> > const struct blockcipher_test_data *tdata = t->test_data;
> > @@ -514,7 +513,7 @@ test_blockcipher_one_case(const struct
> > blockcipher_test_case *t,
> > sess = rte_cryptodev_sym_session_create(sess_mpool);
> >
> > status = rte_cryptodev_sym_session_init(dev_id, sess,
> > - init_xform, sess_priv_mpool);
> > + init_xform);
> > if (status == -ENOTSUP) {
> > snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > "UNSUPPORTED");
> > status = TEST_SKIPPED;
> > @@ -831,7 +830,6 @@ blockcipher_test_case_run(const void *data)
> > p_testsuite_params->mbuf_pool,
> > p_testsuite_params->op_mpool,
> > p_testsuite_params->session_mpool,
> > - p_testsuite_params->session_priv_mpool,
> > p_testsuite_params->valid_devs[0],
> > test_msg);
> > return status;
> > diff --git a/app/test/test_event_crypto_adapter.c
> > b/app/test/test_event_crypto_adapter.c
> > index 3ad20921e2..59229a1cde 100644
> > --- a/app/test/test_event_crypto_adapter.c
> > +++ b/app/test/test_event_crypto_adapter.c
> > @@ -61,7 +61,6 @@ struct event_crypto_adapter_test_params {
> > struct rte_mempool *mbuf_pool;
> > struct rte_mempool *op_mpool;
> > struct rte_mempool *session_mpool;
> > - struct rte_mempool *session_priv_mpool;
> > struct rte_cryptodev_config *config;
> > uint8_t crypto_event_port_id;
> > uint8_t internal_port_op_fwd;
> > @@ -167,7 +166,7 @@ static int
> > test_op_forward_mode(uint8_t session_less)
> > {
> > struct rte_crypto_sym_xform cipher_xform;
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> > union rte_event_crypto_metadata m_data;
> > struct rte_crypto_sym_op *sym_op;
> > struct rte_crypto_op *op;
> > @@ -203,7 +202,7 @@ test_op_forward_mode(uint8_t session_less)
> >
> > /* Create Crypto session*/
> > ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> > - &cipher_xform,
> params.session_priv_mpool);
> > + &cipher_xform);
> > TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> >
> > ret = rte_event_crypto_adapter_caps_get(evdev,
> > TEST_CDEV_ID,
> > @@ -367,7 +366,7 @@ static int
> > test_op_new_mode(uint8_t session_less)
> > {
> > struct rte_crypto_sym_xform cipher_xform;
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> > union rte_event_crypto_metadata m_data;
> > struct rte_crypto_sym_op *sym_op;
> > struct rte_crypto_op *op;
> > @@ -411,7 +410,7 @@ test_op_new_mode(uint8_t session_less)
> > &m_data, sizeof(m_data));
> > }
> > ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> > - &cipher_xform,
> params.session_priv_mpool);
> > + &cipher_xform);
> > TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> >
> > rte_crypto_op_attach_sym_session(op, sess);
> > @@ -553,22 +552,12 @@ configure_cryptodev(void)
> >
> > params.session_mpool = rte_cryptodev_sym_session_pool_create(
> > "CRYPTO_ADAPTER_SESSION_MP",
> > - MAX_NB_SESSIONS, 0, 0,
> > + MAX_NB_SESSIONS, session_size, 0,
> > sizeof(union rte_event_crypto_metadata),
> > SOCKET_ID_ANY);
> > TEST_ASSERT_NOT_NULL(params.session_mpool,
> > "session mempool allocation failed\n");
> >
> > - params.session_priv_mpool = rte_mempool_create(
> > - "CRYPTO_AD_SESS_MP_PRIV",
> > - MAX_NB_SESSIONS,
> > - session_size,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, SOCKET_ID_ANY,
> > - 0);
> > - TEST_ASSERT_NOT_NULL(params.session_priv_mpool,
> > - "session mempool allocation failed\n");
> > -
> > rte_cryptodev_info_get(TEST_CDEV_ID, &info);
> > conf.nb_queue_pairs = info.max_nb_queue_pairs;
> > conf.socket_id = SOCKET_ID_ANY;
> > @@ -580,7 +569,6 @@ configure_cryptodev(void)
> >
> > qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
> > qp_conf.mp_session = params.session_mpool;
> > - qp_conf.mp_session_private = params.session_priv_mpool;
> >
> > TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > TEST_CDEV_ID, TEST_CDEV_QP_ID, &qp_conf,
> > @@ -934,12 +922,6 @@ crypto_teardown(void)
> > rte_mempool_free(params.session_mpool);
> > params.session_mpool = NULL;
> > }
> > - if (params.session_priv_mpool != NULL) {
> > - rte_mempool_avail_count(params.session_priv_mpool);
> > - rte_mempool_free(params.session_priv_mpool);
> > - params.session_priv_mpool = NULL;
> > - }
> > -
> > /* Free ops mempool */
> > if (params.op_mpool != NULL) {
> > RTE_LOG(DEBUG, USER1, "EVENT_CRYPTO_SYM_OP_POOL
> > count %u\n",
> > diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> > index 2ffa2a8e79..134545efe1 100644
> > --- a/app/test/test_ipsec.c
> > +++ b/app/test/test_ipsec.c
> > @@ -355,20 +355,9 @@ testsuite_setup(void)
> > return TEST_FAILED;
> > }
> >
> > - ts_params->qp_conf.mp_session_private = rte_mempool_create(
> > - "test_priv_sess_mp",
> > - MAX_NB_SESSIONS,
> > - sess_sz,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, SOCKET_ID_ANY,
> > - 0);
> > -
> > - TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
> > - "private session mempool allocation failed");
> > -
> > ts_params->qp_conf.mp_session =
> > rte_cryptodev_sym_session_pool_create("test_sess_mp",
> > - MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
> > + MAX_NB_SESSIONS, sess_sz, 0, 0, SOCKET_ID_ANY);
> >
> > TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
> > "session mempool allocation failed");
> > @@ -413,11 +402,6 @@ testsuite_teardown(void)
> > rte_mempool_free(ts_params->qp_conf.mp_session);
> > ts_params->qp_conf.mp_session = NULL;
> > }
> > -
> > - if (ts_params->qp_conf.mp_session_private != NULL) {
> > - rte_mempool_free(ts_params->qp_conf.mp_session_private);
> > - ts_params->qp_conf.mp_session_private = NULL;
> > - }
> > }
> >
> > static int
> > @@ -644,7 +628,7 @@ create_crypto_session(struct ipsec_unitest_params
> > *ut,
> > struct rte_cryptodev_qp_conf *qp, uint8_t dev_id, uint32_t j)
> > {
> > int32_t rc;
> > - struct rte_cryptodev_sym_session *s;
> > + void *s;
> >
> > s = rte_cryptodev_sym_session_create(qp->mp_session);
> > if (s == NULL)
> > @@ -652,7 +636,7 @@ create_crypto_session(struct ipsec_unitest_params
> > *ut,
> >
> > /* initiliaze SA crypto session for device */
> > rc = rte_cryptodev_sym_session_init(dev_id, s,
> > - ut->crypto_xforms, qp->mp_session_private);
> > + ut->crypto_xforms);
> > if (rc == 0) {
> > ut->ss[j].crypto.ses = s;
> > return 0;
> > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > index edb7275e76..75330292af 100644
> > --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > @@ -235,7 +235,6 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > goto qp_setup_cleanup;
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -259,10 +258,8 @@ aesni_gcm_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev
> > __rte_unused,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> > struct aesni_gcm_private *internals = dev->data->dev_private;
> >
> > @@ -271,42 +268,24 @@ aesni_gcm_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - AESNI_GCM_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > ret = aesni_gcm_set_session_parameters(internals->ops,
> > - sess_private_data, xform);
> > + sess, xform);
> > if (ret != 0) {
> > AESNI_GCM_LOG(ERR, "failed configure session
> > parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct aesni_gcm_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct aesni_gcm_session));
> > }
> >
> > struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
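The same mechanical conversion repeats for each PMD below: the ops now get the
driver's private area directly, so the per-driver mempool get/put and the
set_sym_session_private_data() bookkeeping disappear. Roughly, for a made-up
driver (mydrv_* names are hypothetical; only the prototypes follow this patch):

#include <string.h>
#include <rte_cryptodev.h>

struct mydrv_session { uint8_t key[32]; };  /* stand-in private data */

static int
mydrv_set_session_parameters(struct mydrv_session *s,
        const struct rte_crypto_sym_xform *xform)
{
    /* Parse 'xform' and fill the private area; stubbed out here. */
    (void)xform;
    memset(s, 0, sizeof(*s));
    return 0;
}

static int
mydrv_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
        struct rte_crypto_sym_xform *xform, void *sess)
{
    /* 'sess' already points at this driver's private area. */
    return mydrv_set_session_parameters(sess, xform);
}

static void
mydrv_sym_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
{
    /* Only wipe the private area; the object stays in the caller's pool. */
    if (sess)
        memset(sess, 0, sizeof(struct mydrv_session));
}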
> > diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > index 39c67e3952..efdc05c45f 100644
> > --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > @@ -944,7 +944,6 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> > }
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -974,11 +973,8 @@ aesni_mb_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > /** Configure a aesni multi-buffer session from a crypto xform chain */
> > static int
> > aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > - struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + struct rte_crypto_sym_xform *xform, void *sess)
> > {
> > - void *sess_private_data;
> > struct aesni_mb_private *internals = dev->data->dev_private;
> > int ret;
> >
> > @@ -987,43 +983,25 @@ aesni_mb_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - AESNI_MB_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > ret = aesni_mb_set_session_parameters(internals->mb_mgr,
> > - sess_private_data, xform);
> > + sess, xform);
> > if (ret != 0) {
> > AESNI_MB_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > + RTE_SET_USED(dev);
> >
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct aesni_mb_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct aesni_mb_session));
> > }
> >
> > struct rte_cryptodev_ops aesni_mb_pmd_ops = {
> > diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > index 1b2749fe62..2d3b54b063 100644
> > --- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > +++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > @@ -244,7 +244,6 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > goto qp_setup_cleanup;
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -268,10 +267,8 @@ armv8_crypto_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > if (unlikely(sess == NULL)) {
> > @@ -279,42 +276,23 @@ armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - CDEV_LOG_ERR(
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = armv8_crypto_set_session_parameters(sess_private_data, xform);
> > + ret = armv8_crypto_set_session_parameters(sess, xform);
> > if (ret != 0) {
> > ARMV8_CRYPTO_LOG_ERR("failed configure session
> > parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct armv8_crypto_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct armv8_crypto_session));
> > }
> >
> > struct rte_cryptodev_ops armv8_crypto_pmd_ops = {
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > index 675ed0ad55..b4b167d0c2 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > @@ -224,10 +224,9 @@ bcmfs_sym_get_session(struct rte_crypto_op *op)
> > int
> > bcmfs_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > + RTE_SET_USED(dev);
> > int ret;
> >
> > if (unlikely(sess == NULL)) {
> > @@ -235,44 +234,23 @@ bcmfs_sym_session_configure(struct
> > rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - BCMFS_DP_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = crypto_set_session_parameters(sess_private_data, xform);
> > + ret = crypto_set_session_parameters(sess, xform);
> >
> > if (ret != 0) {
> > BCMFS_DP_LOG(ERR, "Failed configure session parameters");
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /* Clear the memory of session so it doesn't leave key material behind */
> > void
> > -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > - if (sess_priv) {
> > - struct rte_mempool *sess_mp;
> > -
> > - memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
> > - sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + RTE_SET_USED(dev);
> > + if (sess)
> > + memset(sess, 0, sizeof(struct bcmfs_sym_session));
> > }
> >
> > unsigned int
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > index d40595b4bd..7faafe2fd5 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > @@ -93,12 +93,10 @@ bcmfs_process_crypto_op(struct rte_crypto_op *op,
> > int
> > bcmfs_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool);
> > + void *sess);
> >
> > void
> > -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess);
> > +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> >
> > unsigned int
> > bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev
> > __rte_unused);
> > diff --git a/drivers/crypto/caam_jr/caam_jr.c
> > b/drivers/crypto/caam_jr/caam_jr.c
> > index ce7a100778..8a04820fa6 100644
> > --- a/drivers/crypto/caam_jr/caam_jr.c
> > +++ b/drivers/crypto/caam_jr/caam_jr.c
> > @@ -1692,52 +1692,36 @@ caam_jr_set_session_parameters(struct
> > rte_cryptodev *dev,
> > static int
> > caam_jr_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > PMD_INIT_FUNC_TRACE();
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - CAAM_JR_ERR("Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - memset(sess_private_data, 0, sizeof(struct caam_jr_session));
> > - ret = caam_jr_set_session_parameters(dev, xform, sess_private_data);
> > + memset(sess, 0, sizeof(struct caam_jr_session));
> > + ret = caam_jr_set_session_parameters(dev, xform, sess);
> > if (ret != 0) {
> > CAAM_JR_ERR("failed to configure session parameters");
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
> > -
> > return 0;
> > }
> >
> > /* Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -caam_jr_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +caam_jr_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > - struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
> > + RTE_SET_USED(dev);
> > +
> > + struct caam_jr_session *s = (struct caam_jr_session *)sess;
> >
> > PMD_INIT_FUNC_TRACE();
> >
> > - if (sess_priv) {
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > + if (sess) {
> > rte_free(s->cipher_key.data);
> > rte_free(s->auth_key.data);
> > memset(s, 0, sizeof(struct caam_jr_session));
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > }
> > }
> >
> > diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c
> > b/drivers/crypto/ccp/ccp_pmd_ops.c
> > index 0d615d311c..cac1268130 100644
> > --- a/drivers/crypto/ccp/ccp_pmd_ops.c
> > +++ b/drivers/crypto/ccp/ccp_pmd_ops.c
> > @@ -727,7 +727,6 @@ ccp_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> > }
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > /* mempool for batch info */
> > qp->batch_mp = rte_mempool_create(
> > @@ -758,11 +757,9 @@ ccp_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > int ret;
> > - void *sess_private_data;
> > struct ccp_private *internals;
> >
> > if (unlikely(sess == NULL || xform == NULL)) {
> > @@ -770,39 +767,22 @@ ccp_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> > return -ENOMEM;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - CCP_LOG_ERR("Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > internals = (struct ccp_private *)dev->data->dev_private;
> > - ret = ccp_set_session_parameters(sess_private_data, xform, internals);
> > + ret = ccp_set_session_parameters(sess, xform, internals);
> > if (ret != 0) {
> > CCP_LOG_ERR("failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> >
> > return 0;
> > }
> >
> > static void
> > -ccp_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +ccp_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > - if (sess_priv) {
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > - rte_mempool_put(sess_mp, sess_priv);
> > - memset(sess_priv, 0, sizeof(struct ccp_session));
> > - set_sym_session_private_data(sess, index, NULL);
> > - }
> > + RTE_SET_USED(dev);
> > + if (sess)
> > + memset(sess, 0, sizeof(struct ccp_session));
> > }
> >
> > struct rte_cryptodev_ops ccp_ops = {
> > diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > index 99968cc353..50cae5e3d6 100644
> > --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > @@ -32,17 +32,18 @@ cn10k_cpt_sym_temp_sess_create(struct
> > cnxk_cpt_qp *qp, struct rte_crypto_op *op)
> > if (sess == NULL)
> > return NULL;
> >
> > - ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> > - sess, qp->sess_mp_priv);
> > + sess->sess_data[driver_id].data =
> > + (void *)((uint8_t *)sess +
> > + rte_cryptodev_sym_get_header_session_size() +
> > + (driver_id * sess->priv_sz));
> > + priv = get_sym_session_private_data(sess, driver_id);
> > + ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, (void *)priv);
> > if (ret)
> > goto sess_put;
> >
> > - priv = get_sym_session_private_data(sess, driver_id);
> > -
> > sym_op->session = sess;
> >
> > return priv;
> > -
> > sess_put:
> > rte_mempool_put(qp->sess_mp, sess);
> > return NULL;
> > @@ -144,9 +145,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct
> > rte_crypto_op *ops[],
> > ret = cpt_sym_inst_fill(qp, op, sess, infl_req,
> > &inst[0]);
> > if (unlikely(ret)) {
> > - sym_session_clear(cn10k_cryptodev_driver_id,
> > - op->sym->session);
> > - rte_mempool_put(qp->sess_mp, op->sym->session);
> > + sym_session_clear(op->sym->session);
> > return 0;
> > }
> > w7 = sess->cpt_inst_w7;
> > @@ -437,8 +436,7 @@ cn10k_cpt_dequeue_post_process(struct
> > cnxk_cpt_qp *qp,
> > temp_sess_free:
> > if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> > if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> > - sym_session_clear(cn10k_cryptodev_driver_id,
> > - cop->sym->session);
> > + sym_session_clear(cop->sym->session);
> > sz =
> > rte_cryptodev_sym_get_existing_header_session_size(
> > cop->sym->session);
> > memset(cop->sym->session, 0, sz);
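One note on the sessionless path: cn9k/cn10k (and octeontx/otx2 further down)
now locate the per-driver private area inside the single session object
themselves before calling the configure helper. The open-coded offset is
essentially the following (sketch only; temp_sess_priv() is a made-up helper
name, the fields are the ones the hunk above already uses):

#include <stdint.h>
#include <rte_cryptodev.h>

/* Private area of driver 'driver_id' inside a session allocated from the
 * single session mempool: header first, then one priv_sz slot per driver. */
static inline void *
temp_sess_priv(struct rte_cryptodev_sym_session *sess, uint8_t driver_id)
{
    return (uint8_t *)sess +
        rte_cryptodev_sym_get_header_session_size() +
        (driver_id * sess->priv_sz);
}

The result is stored in sess->sess_data[driver_id].data before the driver's
configure helper runs, exactly as both hunks do.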
> > diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > index 4c2dc5b080..5f83581131 100644
> > --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > @@ -81,17 +81,19 @@ cn9k_cpt_sym_temp_sess_create(struct
> > cnxk_cpt_qp *qp, struct rte_crypto_op *op)
> > if (sess == NULL)
> > return NULL;
> >
> > - ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> > - sess, qp->sess_mp_priv);
> > + sess->sess_data[driver_id].data =
> > + (void *)((uint8_t *)sess +
> > + rte_cryptodev_sym_get_header_session_size() +
> > + (driver_id * sess->priv_sz));
> > + priv = get_sym_session_private_data(sess, driver_id);
> > + ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform,
> > + (void *)priv);
> > if (ret)
> > goto sess_put;
> >
> > - priv = get_sym_session_private_data(sess, driver_id);
> > -
> > sym_op->session = sess;
> >
> > return priv;
> > -
> > sess_put:
> > rte_mempool_put(qp->sess_mp, sess);
> > return NULL;
> > @@ -126,8 +128,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct
> > rte_crypto_op *op,
> > ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
> > inst);
> > if (unlikely(ret)) {
> > -
> > sym_session_clear(cn9k_cryptodev_driver_id,
> > - op->sym->session);
> > + sym_session_clear(op->sym->session);
> > rte_mempool_put(qp->sess_mp, op->sym->session);
> > }
> > inst->w7.u64 = sess->cpt_inst_w7;
> > @@ -484,8 +485,7 @@ cn9k_cpt_dequeue_post_process(struct
> > cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
> > temp_sess_free:
> > if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> > if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> > - sym_session_clear(cn9k_cryptodev_driver_id,
> > - cop->sym->session);
> > + sym_session_clear(cop->sym->session);
> > sz =
> > rte_cryptodev_sym_get_existing_header_session_size(
> > cop->sym->session);
> > memset(cop->sym->session, 0, sz);
> > diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > index 41d8fe49e1..52d9cf0cf3 100644
> > --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > @@ -379,7 +379,6 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > }
> >
> > qp->sess_mp = conf->mp_session;
> > - qp->sess_mp_priv = conf->mp_session_private;
> > dev->data->queue_pairs[qp_id] = qp;
> >
> > return 0;
> > @@ -493,27 +492,20 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess,
> > struct roc_cpt *roc_cpt)
> > }
> >
> > int
> > -sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> > +sym_session_configure(struct roc_cpt *roc_cpt,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > + void *sess)
> > {
> > struct cnxk_se_sess *sess_priv;
> > - void *priv;
> > int ret;
> >
> > ret = sym_xform_verify(xform);
> > if (unlikely(ret < 0))
> > return ret;
> >
> > - if (unlikely(rte_mempool_get(pool, &priv))) {
> > - plt_dp_err("Could not allocate session private data");
> > - return -ENOMEM;
> > - }
> > + memset(sess, 0, sizeof(struct cnxk_se_sess));
> >
> > - memset(priv, 0, sizeof(struct cnxk_se_sess));
> > -
> > - sess_priv = priv;
> > + sess_priv = sess;
> >
> > switch (ret) {
> > case CNXK_CPT_CIPHER:
> > @@ -547,7 +539,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int
> > driver_id,
> > }
> >
> > if (ret)
> > - goto priv_put;
> > + return -ENOTSUP;
> >
> > if ((sess_priv->roc_se_ctx.fc_type == ROC_SE_HASH_HMAC) &&
> > cpt_mac_len_verify(&xform->auth)) {
> > @@ -557,66 +549,45 @@ sym_session_configure(struct roc_cpt *roc_cpt,
> int
> > driver_id,
> > sess_priv->roc_se_ctx.auth_key = NULL;
> > }
> >
> > - ret = -ENOTSUP;
> > - goto priv_put;
> > + return -ENOTSUP;
> > }
> >
> > sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
> >
> > - set_sym_session_private_data(sess, driver_id, sess_priv);
> > -
> > return 0;
> > -
> > -priv_put:
> > - rte_mempool_put(pool, priv);
> > -
> > - return -ENOTSUP;
> > }
> >
> > int
> > cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > + void *sess)
> > {
> > struct cnxk_cpt_vf *vf = dev->data->dev_private;
> > struct roc_cpt *roc_cpt = &vf->cpt;
> > - uint8_t driver_id;
> >
> > - driver_id = dev->driver_id;
> > -
> > - return sym_session_configure(roc_cpt, driver_id, xform, sess, pool);
> > + return sym_session_configure(roc_cpt, xform, sess);
> > }
> >
> > void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> > {
> > - void *priv = get_sym_session_private_data(sess, driver_id);
> > - struct cnxk_se_sess *sess_priv;
> > - struct rte_mempool *pool;
> > + struct cnxk_se_sess *sess_priv = sess;
> >
> > - if (priv == NULL)
> > + if (sess == NULL)
> > return;
> >
> > - sess_priv = priv;
> > -
> > if (sess_priv->roc_se_ctx.auth_key != NULL)
> > plt_free(sess_priv->roc_se_ctx.auth_key);
> >
> > - memset(priv, 0, cnxk_cpt_sym_session_get_size(NULL));
> > -
> > - pool = rte_mempool_from_obj(priv);
> > -
> > - set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > - rte_mempool_put(pool, priv);
> > + memset(sess_priv, 0, cnxk_cpt_sym_session_get_size(NULL));
> > }
> >
> > void
> > -cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - return sym_session_clear(dev->driver_id, sess);
> > + RTE_SET_USED(dev);
> > +
> > + return sym_session_clear(sess);
> > }
> >
> > unsigned int
> > diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > index c5332dec53..3c09d10582 100644
> > --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > @@ -111,18 +111,15 @@ unsigned int
> > cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev);
> >
> > int cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool);
> > + void *sess);
> >
> > -int sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> > +int sym_session_configure(struct roc_cpt *roc_cpt,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool);
> > + void *sess);
> >
> > -void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess);
> > +void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> >
> > -void sym_session_clear(int driver_id, struct rte_cryptodev_sym_session
> > *sess);
> > +void sym_session_clear(void *sess);
> >
> > unsigned int cnxk_ae_session_size_get(struct rte_cryptodev *dev
> > __rte_unused);
> >
> > diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > index 176f1a27a0..42229763f8 100644
> > --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > @@ -3438,49 +3438,32 @@ dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
> > static int
> > dpaa2_sec_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - DPAA2_SEC_ERR("Couldn't get object from session
> > mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = dpaa2_sec_set_session_parameters(dev, xform, sess_private_data);
> > + ret = dpaa2_sec_set_session_parameters(dev, xform, sess);
> > if (ret != 0) {
> > DPAA2_SEC_ERR("Failed to configure session parameters");
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > PMD_INIT_FUNC_TRACE();
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > - dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
> > + RTE_SET_USED(dev);
> > + dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> >
> > - if (sess_priv) {
> > + if (sess) {
> > rte_free(s->ctxt);
> > rte_free(s->cipher_key.data);
> > rte_free(s->auth_key.data);
> > memset(s, 0, sizeof(dpaa2_sec_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > }
> > }
> >
> > diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c
> > b/drivers/crypto/dpaa_sec/dpaa_sec.c
> > index 5a087df090..4727088b45 100644
> > --- a/drivers/crypto/dpaa_sec/dpaa_sec.c
> > +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
> > @@ -2537,33 +2537,18 @@ dpaa_sec_set_session_parameters(struct
> > rte_cryptodev *dev,
> >
> > static int
> > dpaa_sec_sym_session_configure(struct rte_cryptodev *dev,
> > - struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + struct rte_crypto_sym_xform *xform, void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > PMD_INIT_FUNC_TRACE();
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - DPAA_SEC_ERR("Couldn't get object from session
> > mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = dpaa_sec_set_session_parameters(dev, xform, sess_private_data);
> > + ret = dpaa_sec_set_session_parameters(dev, xform, sess);
> > if (ret != 0) {
> > DPAA_SEC_ERR("failed to configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > -
> > return 0;
> > }
> >
> > @@ -2584,18 +2569,14 @@ free_session_memory(struct rte_cryptodev
> > *dev, dpaa_sec_session *s)
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -dpaa_sec_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +dpaa_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > PMD_INIT_FUNC_TRACE();
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > - dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
> > + RTE_SET_USED(dev);
> > + dpaa_sec_session *s = (dpaa_sec_session *)sess;
> >
> > - if (sess_priv) {
> > + if (sess)
> > free_session_memory(dev, s);
> > - set_sym_session_private_data(sess, index, NULL);
> > - }
> > }
> >
> > #ifdef RTE_LIB_SECURITY
> > diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > index f075054807..b2e5c92598 100644
> > --- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > +++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > @@ -220,7 +220,6 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> >
> > qp->mgr = internals->mgr;
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -243,10 +242,8 @@ kasumi_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> > struct kasumi_private *internals = dev->data->dev_private;
> >
> > @@ -255,43 +252,24 @@ kasumi_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - KASUMI_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > ret = kasumi_set_session_parameters(internals->mgr,
> > - sess_private_data, xform);
> > + sess, xform);
> > if (ret != 0) {
> > KASUMI_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct kasumi_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct kasumi_session));
> > }
> >
> > struct rte_cryptodev_ops kasumi_pmd_ops = {
> > diff --git a/drivers/crypto/mlx5/mlx5_crypto.c
> > b/drivers/crypto/mlx5/mlx5_crypto.c
> > index 682cf8b607..615ab9f45d 100644
> > --- a/drivers/crypto/mlx5/mlx5_crypto.c
> > +++ b/drivers/crypto/mlx5/mlx5_crypto.c
> > @@ -165,14 +165,12 @@ mlx5_crypto_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *session,
> > - struct rte_mempool *mp)
> > + void *session)
> > {
> > struct mlx5_crypto_priv *priv = dev->data->dev_private;
> > - struct mlx5_crypto_session *sess_private_data;
> > + struct mlx5_crypto_session *sess_private_data = session;
> > struct rte_crypto_cipher_xform *cipher;
> > uint8_t encryption_order;
> > - int ret;
> >
> > if (unlikely(xform->next != NULL)) {
> > DRV_LOG(ERR, "Xform next is not supported.");
> > @@ -183,17 +181,9 @@ mlx5_crypto_sym_session_configure(struct
> > rte_cryptodev *dev,
> > DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
> > return -ENOTSUP;
> > }
> > - ret = rte_mempool_get(mp, (void *)&sess_private_data);
> > - if (ret != 0) {
> > - DRV_LOG(ERR,
> > - "Failed to get session %p private data from
> > mempool.",
> > - sess_private_data);
> > - return -ENOMEM;
> > - }
> > cipher = &xform->cipher;
> > sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
> > if (sess_private_data->dek == NULL) {
> > - rte_mempool_put(mp, sess_private_data);
> > DRV_LOG(ERR, "Failed to prepare dek.");
> > return -ENOMEM;
> > }
> > @@ -228,27 +218,21 @@ mlx5_crypto_sym_session_configure(struct
> > rte_cryptodev *dev,
> > sess_private_data->dek_id =
> > rte_cpu_to_be_32(sess_private_data->dek->obj->id
> > &
> > 0xffffff);
> > - set_sym_session_private_data(session, dev->driver_id,
> > - sess_private_data);
> > DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
> > return 0;
> > }
> >
> > static void
> > -mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > struct mlx5_crypto_priv *priv = dev->data->dev_private;
> > - struct mlx5_crypto_session *spriv = get_sym_session_private_data(sess,
> > - dev->driver_id);
> > + struct mlx5_crypto_session *spriv = sess;
> >
> > if (unlikely(spriv == NULL)) {
> > DRV_LOG(ERR, "Failed to get session %p private data.",
> spriv);
> > return;
> > }
> > mlx5_crypto_dek_destroy(priv, spriv->dek);
> > - set_sym_session_private_data(sess, dev->driver_id, NULL);
> > - rte_mempool_put(rte_mempool_from_obj(spriv), spriv);
> > DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
> > }
> >
> > diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > index e04a2c88c7..2e4b27ea21 100644
> > --- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > +++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > @@ -704,7 +704,6 @@ mrvl_crypto_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > break;
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->stats, 0, sizeof(qp->stats));
> > dev->data->queue_pairs[qp_id] = qp;
> > @@ -735,12 +734,9 @@
> > mrvl_crypto_pmd_sym_session_get_size(__rte_unused struct
> > rte_cryptodev *dev)
> > */
> > static int
> > mrvl_crypto_pmd_sym_session_configure(__rte_unused struct
> > rte_cryptodev *dev,
> > - struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mp)
> > + struct rte_crypto_sym_xform *xform, void *sess)
> > {
> > struct mrvl_crypto_session *mrvl_sess;
> > - void *sess_private_data;
> > int ret;
> >
> > if (sess == NULL) {
> > @@ -748,25 +744,16 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mp, &sess_private_data)) {
> > - CDEV_LOG_ERR("Couldn't get object from session
> > mempool.");
> > - return -ENOMEM;
> > - }
> > + memset(sess, 0, sizeof(struct mrvl_crypto_session));
> >
> > - memset(sess_private_data, 0, sizeof(struct mrvl_crypto_session));
> > -
> > - ret = mrvl_crypto_set_session_parameters(sess_private_data, xform);
> > + ret = mrvl_crypto_set_session_parameters(sess, xform);
> > if (ret != 0) {
> > MRVL_LOG(ERR, "Failed to configure session parameters!");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mp, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
> >
> > - mrvl_sess = (struct mrvl_crypto_session *)sess_private_data;
> > + mrvl_sess = (struct mrvl_crypto_session *)sess;
> > if (sam_session_create(&mrvl_sess->sam_sess_params,
> > &mrvl_sess->sam_sess) < 0) {
> > MRVL_LOG(DEBUG, "Failed to create session!");
> > @@ -789,17 +776,13 @@
> > mrvl_crypto_pmd_sym_session_configure(__rte_unused struct
> > rte_cryptodev *dev,
> > * @returns 0. Always.
> > */
> > static void
> > -mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void
> > *sess)
> > {
> > -
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > + if (sess) {
> > struct mrvl_crypto_session *mrvl_sess =
> > - (struct mrvl_crypto_session *)sess_priv;
> > + (struct mrvl_crypto_session *)sess;
> >
> > if (mrvl_sess->sam_sess &&
> > sam_session_destroy(mrvl_sess->sam_sess) < 0) {
> > @@ -807,9 +790,6 @@ mrvl_crypto_pmd_sym_session_clear(struct
> > rte_cryptodev *dev,
> > }
> >
> > memset(mrvl_sess, 0, sizeof(struct mrvl_crypto_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > }
> > }
> >
> > diff --git a/drivers/crypto/nitrox/nitrox_sym.c
> > b/drivers/crypto/nitrox/nitrox_sym.c
> > index f8b7edcd69..0c9bbfef46 100644
> > --- a/drivers/crypto/nitrox/nitrox_sym.c
> > +++ b/drivers/crypto/nitrox/nitrox_sym.c
> > @@ -532,22 +532,16 @@ configure_aead_ctx(struct rte_crypto_aead_xform *xform,
> > static int
> > nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *mp_obj;
> > struct nitrox_crypto_ctx *ctx;
> > struct rte_crypto_cipher_xform *cipher_xform = NULL;
> > struct rte_crypto_auth_xform *auth_xform = NULL;
> > struct rte_crypto_aead_xform *aead_xform = NULL;
> > int ret = -EINVAL;
> >
> > - if (rte_mempool_get(mempool, &mp_obj)) {
> > - NITROX_LOG(ERR, "Couldn't allocate context\n");
> > - return -ENOMEM;
> > - }
> > -
> > - ctx = mp_obj;
> > + RTE_SET_USED(cdev);
> > + ctx = sess;
> > ctx->nitrox_chain = get_crypto_chain_order(xform);
> > switch (ctx->nitrox_chain) {
> > case NITROX_CHAIN_CIPHER_ONLY:
> > @@ -586,28 +580,17 @@ nitrox_sym_dev_sess_configure(struct
> > rte_cryptodev *cdev,
> > }
> >
> > ctx->iova = rte_mempool_virt2iova(ctx);
> > - set_sym_session_private_data(sess, cdev->driver_id, ctx);
> > return 0;
> > err:
> > - rte_mempool_put(mempool, mp_obj);
> > return ret;
> > }
> >
> > static void
> > -nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev,
> > - struct rte_cryptodev_sym_session *sess)
> > +nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev, void *sess)
> > {
> > - struct nitrox_crypto_ctx *ctx = get_sym_session_private_data(sess,
> > - cdev->driver_id);
> > - struct rte_mempool *sess_mp;
> > -
> > - if (!ctx)
> > - return;
> > -
> > - memset(ctx, 0, sizeof(*ctx));
> > - sess_mp = rte_mempool_from_obj(ctx);
> > - set_sym_session_private_data(sess, cdev->driver_id, NULL);
> > - rte_mempool_put(sess_mp, ctx);
> > + RTE_SET_USED(cdev);
> > + if (sess)
> > + memset(sess, 0, sizeof(struct nitrox_crypto_ctx));
> > }
> >
> > static struct nitrox_crypto_ctx *
> > diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c
> > b/drivers/crypto/null/null_crypto_pmd_ops.c
> > index a8b5a06e7f..65bfa8dcf7 100644
> > --- a/drivers/crypto/null/null_crypto_pmd_ops.c
> > +++ b/drivers/crypto/null/null_crypto_pmd_ops.c
> > @@ -234,7 +234,6 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > }
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -258,10 +257,8 @@ null_crypto_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev
> > __rte_unused,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mp)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > if (unlikely(sess == NULL)) {
> > @@ -269,42 +266,23 @@ null_crypto_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mp, &sess_private_data)) {
> > - NULL_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = null_crypto_set_session_parameters(sess_private_data, xform);
> > + ret = null_crypto_set_session_parameters(sess, xform);
> > if (ret != 0) {
> > NULL_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mp, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void
> > *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct null_crypto_session));
> > - struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct null_crypto_session));
> > }
> >
> > static struct rte_cryptodev_ops pmd_ops = {
> > diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > index 7c6b1e45b4..95659e472b 100644
> > --- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > @@ -49,7 +49,6 @@ struct cpt_instance {
> > uint32_t queue_id;
> > uintptr_t rsvd;
> > struct rte_mempool *sess_mp;
> > - struct rte_mempool *sess_mp_priv;
> > struct cpt_qp_meta_info meta_info;
> > uint8_t ca_enabled;
> > };
> > diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > index 9e8fd495cf..abd0963be0 100644
> > --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > @@ -171,7 +171,6 @@ otx_cpt_que_pair_setup(struct rte_cryptodev *dev,
> >
> > instance->queue_id = que_pair_id;
> > instance->sess_mp = qp_conf->mp_session;
> > - instance->sess_mp_priv = qp_conf->mp_session_private;
> > dev->data->queue_pairs[que_pair_id] = instance;
> >
> > return 0;
> > @@ -243,29 +242,22 @@ sym_xform_verify(struct rte_crypto_sym_xform
> > *xform)
> > }
> >
> > static int
> > -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > +sym_session_configure(struct rte_crypto_sym_xform *xform,
> > + void *sess)
> > {
> > struct rte_crypto_sym_xform *temp_xform = xform;
> > struct cpt_sess_misc *misc;
> > vq_cmd_word3_t vq_cmd_w3;
> > - void *priv;
> > int ret;
> >
> > ret = sym_xform_verify(xform);
> > if (unlikely(ret))
> > return ret;
> >
> > - if (unlikely(rte_mempool_get(pool, &priv))) {
> > - CPT_LOG_ERR("Could not allocate session private data");
> > - return -ENOMEM;
> > - }
> > -
> > - memset(priv, 0, sizeof(struct cpt_sess_misc) +
> > + memset(sess, 0, sizeof(struct cpt_sess_misc) +
> > offsetof(struct cpt_ctx, mc_ctx));
> >
> > - misc = priv;
> > + misc = sess;
> >
> > for ( ; xform != NULL; xform = xform->next) {
> > switch (xform->type) {
> > @@ -301,8 +293,6 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> > goto priv_put;
> > }
> >
> > - set_sym_session_private_data(sess, driver_id, priv);
> > -
> > misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
> > sizeof(struct cpt_sess_misc);
> >
> > @@ -316,56 +306,46 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> > return 0;
> >
> > priv_put:
> > - if (priv)
> > - rte_mempool_put(pool, priv);
> > return -ENOTSUP;
> > }
> >
> > static void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> > {
> > - void *priv = get_sym_session_private_data(sess, driver_id);
> > struct cpt_sess_misc *misc;
> > - struct rte_mempool *pool;
> > struct cpt_ctx *ctx;
> >
> > - if (priv == NULL)
> > + if (sess == NULL)
> > return;
> >
> > - misc = priv;
> > + misc = sess;
> > ctx = SESS_PRIV(misc);
> >
> > if (ctx->auth_key != NULL)
> > rte_free(ctx->auth_key);
> >
> > - memset(priv, 0, cpt_get_session_size());
> > -
> > - pool = rte_mempool_from_obj(priv);
> > -
> > - set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > - rte_mempool_put(pool, priv);
> > + memset(sess, 0, cpt_get_session_size());
> > }
> >
> > static int
> > otx_cpt_session_cfg(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > + void *sess)
> > {
> > CPT_PMD_INIT_FUNC_TRACE();
> > + RTE_SET_USED(dev);
> >
> > - return sym_session_configure(dev->driver_id, xform, sess, pool);
> > + return sym_session_configure(xform, sess);
> > }
> >
> >
> > static void
> > -otx_cpt_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +otx_cpt_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > CPT_PMD_INIT_FUNC_TRACE();
> > + RTE_SET_USED(dev);
> >
> > - return sym_session_clear(dev->driver_id, sess);
> > + return sym_session_clear(sess);
> > }
> >
> > static unsigned int
> > @@ -576,7 +556,6 @@ static __rte_always_inline void * __rte_hot
> > otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
> > struct rte_crypto_op *op)
> > {
> > - const int driver_id = otx_cryptodev_driver_id;
> > struct rte_crypto_sym_op *sym_op = op->sym;
> > struct rte_cryptodev_sym_session *sess;
> > void *req;
> > @@ -589,8 +568,12 @@ otx_cpt_enq_single_sym_sessless(struct
> > cpt_instance *instance,
> > return NULL;
> > }
> >
> > - ret = sym_session_configure(driver_id, sym_op->xform, sess,
> > - instance->sess_mp_priv);
> > + sess->sess_data[otx_cryptodev_driver_id].data =
> > + (void *)((uint8_t *)sess +
> > + rte_cryptodev_sym_get_header_session_size() +
> > + (otx_cryptodev_driver_id * sess->priv_sz));
> > + ret = sym_session_configure(sym_op->xform,
> > + sess->sess_data[otx_cryptodev_driver_id].data);
> > if (ret)
> > goto sess_put;
> >
> > @@ -604,7 +587,7 @@ otx_cpt_enq_single_sym_sessless(struct
> > cpt_instance *instance,
> > return req;
> >
> > priv_put:
> > - sym_session_clear(driver_id, sess);
> > + sym_session_clear(sess);
> > sess_put:
> > rte_mempool_put(instance->sess_mp, sess);
> > return NULL;
> > @@ -913,7 +896,6 @@ free_sym_session_data(const struct cpt_instance
> > *instance,
> > memset(cop->sym->session, 0,
> > rte_cryptodev_sym_get_existing_header_session_size(
> > cop->sym->session));
> > - rte_mempool_put(instance->sess_mp_priv, sess_private_data_t);
> > rte_mempool_put(instance->sess_mp, cop->sym->session);
> > cop->sym->session = NULL;
> > }
> > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > index 7b744cd4b4..dcfbc49996 100644
> > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > @@ -371,29 +371,21 @@ sym_xform_verify(struct rte_crypto_sym_xform
> > *xform)
> > }
> >
> > static int
> > -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > +sym_session_configure(struct rte_crypto_sym_xform *xform, void *sess)
> > {
> > struct rte_crypto_sym_xform *temp_xform = xform;
> > struct cpt_sess_misc *misc;
> > vq_cmd_word3_t vq_cmd_w3;
> > - void *priv;
> > int ret;
> >
> > ret = sym_xform_verify(xform);
> > if (unlikely(ret))
> > return ret;
> >
> > - if (unlikely(rte_mempool_get(pool, &priv))) {
> > - CPT_LOG_ERR("Could not allocate session private data");
> > - return -ENOMEM;
> > - }
> > -
> > - memset(priv, 0, sizeof(struct cpt_sess_misc) +
> > + memset(sess, 0, sizeof(struct cpt_sess_misc) +
> > offsetof(struct cpt_ctx, mc_ctx));
> >
> > - misc = priv;
> > + misc = sess;
> >
> > for ( ; xform != NULL; xform = xform->next) {
> > switch (xform->type) {
> > @@ -414,7 +406,7 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> > }
> >
> > if (ret)
> > - goto priv_put;
> > + return -ENOTSUP;
> > }
> >
> > if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
> > @@ -425,12 +417,9 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> > rte_free(ctx->auth_key);
> > ctx->auth_key = NULL;
> > }
> > - ret = -ENOTSUP;
> > - goto priv_put;
> > + return -ENOTSUP;
> > }
> >
> > - set_sym_session_private_data(sess, driver_id, misc);
> > -
> > misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
> > sizeof(struct cpt_sess_misc);
> >
> > @@ -451,11 +440,6 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> > misc->cpt_inst_w7 = vq_cmd_w3.u64;
> >
> > return 0;
> > -
> > -priv_put:
> > - rte_mempool_put(pool, priv);
> > -
> > - return -ENOTSUP;
> > }
> >
> > static __rte_always_inline int32_t __rte_hot
> > @@ -765,7 +749,6 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
> > struct pending_queue *pend_q,
> > unsigned int burst_index)
> > {
> > - const int driver_id = otx2_cryptodev_driver_id;
> > struct rte_crypto_sym_op *sym_op = op->sym;
> > struct rte_cryptodev_sym_session *sess;
> > int ret;
> > @@ -775,8 +758,12 @@ otx2_cpt_enqueue_sym_sessless(struct
> > otx2_cpt_qp *qp, struct rte_crypto_op *op,
> > if (sess == NULL)
> > return -ENOMEM;
> >
> > - ret = sym_session_configure(driver_id, sym_op->xform, sess,
> > - qp->sess_mp_priv);
> > + sess->sess_data[otx2_cryptodev_driver_id].data =
> > + (void *)((uint8_t *)sess +
> > + rte_cryptodev_sym_get_header_session_size() +
> > + (otx2_cryptodev_driver_id * sess->priv_sz));
> > + ret = sym_session_configure(sym_op->xform,
> > + sess->sess_data[otx2_cryptodev_driver_id].data);
> > if (ret)
> > goto sess_put;
> >
> > @@ -790,7 +777,7 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
> > return 0;
> >
> > priv_put:
> > - sym_session_clear(driver_id, sess);
> > + sym_session_clear(sess);
> > sess_put:
> > rte_mempool_put(qp->sess_mp, sess);
> > return ret;
> > @@ -1035,8 +1022,7 @@ otx2_cpt_dequeue_post_process(struct
> > otx2_cpt_qp *qp, struct rte_crypto_op *cop,
> > }
> >
> > if (unlikely(cop->sess_type ==
> > RTE_CRYPTO_OP_SESSIONLESS)) {
> > - sym_session_clear(otx2_cryptodev_driver_id,
> > - cop->sym->session);
> > + sym_session_clear(cop->sym->session);
> > sz =
> > rte_cryptodev_sym_get_existing_header_session_size(
> > cop->sym->session);
> > memset(cop->sym->session, 0, sz);
> > @@ -1291,7 +1277,6 @@ otx2_cpt_queue_pair_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> > }
> >
> > qp->sess_mp = conf->mp_session;
> > - qp->sess_mp_priv = conf->mp_session_private;
> > dev->data->queue_pairs[qp_id] = qp;
> >
> > return 0;
> > @@ -1330,21 +1315,22 @@ otx2_cpt_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *pool)
> > + void *sess)
> > {
> > CPT_PMD_INIT_FUNC_TRACE();
> > + RTE_SET_USED(dev);
> >
> > - return sym_session_configure(dev->driver_id, xform, sess, pool);
> > + return sym_session_configure(xform, sess);
> > }
> >
> > static void
> > otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > + void *sess)
> > {
> > CPT_PMD_INIT_FUNC_TRACE();
> > + RTE_SET_USED(dev);
> >
> > - return sym_session_clear(dev->driver_id, sess);
> > + return sym_session_clear(sess);
> > }
> >
> > static unsigned int
> > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > index 01c081a216..5f63eaf7b7 100644
> > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > @@ -8,29 +8,21 @@
> > #include "cpt_pmd_logs.h"
> >
> > static void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> > {
> > - void *priv = get_sym_session_private_data(sess, driver_id);
> > struct cpt_sess_misc *misc;
> > - struct rte_mempool *pool;
> > struct cpt_ctx *ctx;
> >
> > - if (priv == NULL)
> > + if (sess == NULL)
> > return;
> >
> > - misc = priv;
> > + misc = sess;
> > ctx = SESS_PRIV(misc);
> >
> > if (ctx->auth_key != NULL)
> > rte_free(ctx->auth_key);
> >
> > - memset(priv, 0, cpt_get_session_size());
> > -
> > - pool = rte_mempool_from_obj(priv);
> > -
> > - set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > - rte_mempool_put(pool, priv);
> > + memset(sess, 0, cpt_get_session_size());
> > }
> >
> > static __rte_always_inline uint8_t
> > diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > index 52715f86f8..1b48a6b400 100644
> > --- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > +++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > @@ -741,7 +741,6 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> > goto qp_setup_cleanup;
> >
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -772,10 +771,8 @@ openssl_pmd_asym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > openssl_pmd_sym_session_configure(struct rte_cryptodev *dev
> > __rte_unused,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > if (unlikely(sess == NULL)) {
> > @@ -783,24 +780,12 @@ openssl_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - OPENSSL_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - ret = openssl_set_session_parameters(sess_private_data, xform);
> > + ret = openssl_set_session_parameters(sess, xform);
> > if (ret != 0) {
> > OPENSSL_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > @@ -1154,19 +1139,13 @@ openssl_pmd_asym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -openssl_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +openssl_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - openssl_reset_session(sess_priv);
> > - memset(sess_priv, 0, sizeof(struct openssl_session));
> > - struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > + if (sess) {
> > + openssl_reset_session(sess);
> > + memset(sess, 0, sizeof(struct openssl_session));
> > }
> > }
> >
> > diff --git a/drivers/crypto/qat/qat_sym_session.c
> > b/drivers/crypto/qat/qat_sym_session.c
> > index 2a22347c7f..114bf081c1 100644
> > --- a/drivers/crypto/qat/qat_sym_session.c
> > +++ b/drivers/crypto/qat/qat_sym_session.c
> > @@ -172,21 +172,14 @@ qat_is_auth_alg_supported(enum
> > rte_crypto_auth_algorithm algo,
> > }
> >
> > void
> > -qat_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +qat_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > - struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
> > + struct qat_sym_session *s = (struct qat_sym_session *)sess;
> >
> > - if (sess_priv) {
> > + if (sess) {
> > if (s->bpi_ctx)
> > bpi_cipher_ctx_free(s->bpi_ctx);
> > memset(s, 0, qat_sym_session_get_private_size(dev));
> > - struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > }
> > }
> >
> > @@ -458,31 +451,17 @@ qat_sym_session_configure_cipher(struct
> > rte_cryptodev *dev,
> > int
> > qat_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess_private_data)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - CDEV_LOG_ERR(
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > ret = qat_sym_session_set_parameters(dev, xform,
> > sess_private_data);
> > if (ret != 0) {
> > QAT_LOG(ERR,
> > 			"Crypto QAT PMD: failed to configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > diff --git a/drivers/crypto/qat/qat_sym_session.h
> > b/drivers/crypto/qat/qat_sym_session.h
> > index 7fcc1d6f7b..6da29e2305 100644
> > --- a/drivers/crypto/qat/qat_sym_session.h
> > +++ b/drivers/crypto/qat/qat_sym_session.h
> > @@ -112,8 +112,7 @@ struct qat_sym_session {
> > int
> > qat_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool);
> > + void *sess);
> >
> > int
> > qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> > @@ -135,8 +134,7 @@ qat_sym_session_configure_auth(struct
> > rte_cryptodev *dev,
> > struct qat_sym_session *session);
> >
> > void
> > -qat_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *session);
> > +qat_sym_session_clear(struct rte_cryptodev *dev, void *session);
> >
> > unsigned int
> > qat_sym_session_get_private_size(struct rte_cryptodev *dev);
> > diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > index 465b88ade8..87260b5a22 100644
> > --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > @@ -476,9 +476,7 @@ scheduler_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >
> > static int
> > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > - struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + struct rte_crypto_sym_xform *xform, void *sess)
> > {
> > struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > uint32_t i;
> > @@ -488,7 +486,7 @@ scheduler_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> > struct scheduler_worker *worker = &sched_ctx->workers[i];
> >
> > ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > - xform, mempool);
> > + xform);
> > if (ret < 0) {
> > 			CR_SCHED_LOG(ERR, "unable to config sym session");
> > return ret;
> > @@ -500,8 +498,7 @@ scheduler_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > uint32_t i;
> > diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > index 3f46014b7d..b0f8f6d86a 100644
> > --- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > +++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > @@ -226,7 +226,6 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> >
> > qp->mgr = internals->mgr;
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -250,10 +249,8 @@ snow3g_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> > struct snow3g_private *internals = dev->data->dev_private;
> >
> > @@ -262,43 +259,24 @@ snow3g_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - SNOW3G_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > ret = snow3g_set_session_parameters(internals->mgr,
> > - sess_private_data, xform);
> > + sess, xform);
> > if (ret != 0) {
> > SNOW3G_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct snow3g_session));
> > - struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct snow3g_session));
> > }
> >
> > struct rte_cryptodev_ops snow3g_pmd_ops = {
> > diff --git a/drivers/crypto/virtio/virtio_cryptodev.c
> > b/drivers/crypto/virtio/virtio_cryptodev.c
> > index 8faa39df4a..de52fec32e 100644
> > --- a/drivers/crypto/virtio/virtio_cryptodev.c
> > +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> > @@ -37,11 +37,10 @@ static void virtio_crypto_dev_free_mbufs(struct
> > rte_cryptodev *dev);
> > static unsigned int virtio_crypto_sym_get_session_private_size(
> > struct rte_cryptodev *dev);
> > static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess);
> > + void *sess);
> > static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *session,
> > - struct rte_mempool *mp);
> > + void *session);
> >
> > /*
> > * The set of PCI devices this driver supports
> > @@ -927,7 +926,7 @@ virtio_crypto_check_sym_clear_session_paras(
> > static void
> > virtio_crypto_sym_clear_session(
> > struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > + void *sess)
> > {
> > struct virtio_crypto_hw *hw;
> > struct virtqueue *vq;
> > @@ -1290,11 +1289,9 @@ static int
> > virtio_crypto_check_sym_configure_session_paras(
> > struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sym_sess,
> > - struct rte_mempool *mempool)
> > + void *sym_sess)
> > {
> > - if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
> > - unlikely(mempool == NULL)) {
> > + if (unlikely(xform == NULL) || unlikely(sym_sess == NULL)) {
> > VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
> > return -1;
> > }
> > @@ -1309,12 +1306,9 @@ static int
> > virtio_crypto_sym_configure_session(
> > struct rte_cryptodev *dev,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > int ret;
> > - struct virtio_crypto_session crypto_sess;
> > - void *session_private = &crypto_sess;
> > struct virtio_crypto_session *session;
> > struct virtio_crypto_op_ctrl_req *ctrl_req;
> > enum virtio_crypto_cmd_id cmd_id;
> > @@ -1326,19 +1320,13 @@ virtio_crypto_sym_configure_session(
> > PMD_INIT_FUNC_TRACE();
> >
> > ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
> > - sess, mempool);
> > + sess);
> > if (ret < 0) {
> > VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
> > return ret;
> > }
> >
> > - if (rte_mempool_get(mempool, &session_private)) {
> > - VIRTIO_CRYPTO_SESSION_LOG_ERR(
> > - "Couldn't get object from session mempool");
> > - return -ENOMEM;
> > - }
> > -
> > - session = (struct virtio_crypto_session *)session_private;
> > + session = (struct virtio_crypto_session *)sess;
> > memset(session, 0, sizeof(struct virtio_crypto_session));
> > ctrl_req = &session->ctrl;
> > ctrl_req->header.opcode =
> > VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
> > @@ -1401,9 +1389,6 @@ virtio_crypto_sym_configure_session(
> > goto error_out;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - session_private);
> > -
> > return 0;
> >
> > error_out:
> > diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > index 38642d45ab..04126c8a04 100644
> > --- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > +++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > @@ -226,7 +226,6 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> >
> > qp->mb_mgr = internals->mb_mgr;
> > qp->sess_mp = qp_conf->mp_session;
> > - qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> > memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -250,10 +249,8 @@ zuc_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> > static int
> > zuc_pmd_sym_session_configure(struct rte_cryptodev *dev
> __rte_unused,
> > struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_mempool *mempool)
> > + void *sess)
> > {
> > - void *sess_private_data;
> > int ret;
> >
> > if (unlikely(sess == NULL)) {
> > @@ -261,43 +258,23 @@ zuc_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> > return -EINVAL;
> > }
> >
> > - if (rte_mempool_get(mempool, &sess_private_data)) {
> > - ZUC_LOG(ERR,
> > - "Couldn't get object from session mempool");
> > -
> > - return -ENOMEM;
> > - }
> > -
> > - ret = zuc_set_session_parameters(sess_private_data, xform);
> > + ret = zuc_set_session_parameters(sess, xform);
> > if (ret != 0) {
> > ZUC_LOG(ERR, "failed configure session parameters");
> > -
> > - /* Return session to mempool */
> > - rte_mempool_put(mempool, sess_private_data);
> > return ret;
> > }
> >
> > - set_sym_session_private_data(sess, dev->driver_id,
> > - sess_private_data);
> > -
> > return 0;
> > }
> >
> > /** Clear the memory of session so it doesn't leave key material behind */
> > static void
> > -zuc_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess)
> > +zuc_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> > {
> > - uint8_t index = dev->driver_id;
> > - void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > + RTE_SET_USED(dev);
> > /* Zero out the whole structure */
> > - if (sess_priv) {
> > - memset(sess_priv, 0, sizeof(struct zuc_session));
> > - struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > - set_sym_session_private_data(sess, index, NULL);
> > - rte_mempool_put(sess_mp, sess_priv);
> > - }
> > + if (sess)
> > + memset(sess, 0, sizeof(struct zuc_session));
> > }
> >
> > struct rte_cryptodev_ops zuc_pmd_ops = {
> > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > index b33cb7e139..8522f2dfda 100644
> > --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > @@ -38,8 +38,7 @@ otx2_ca_deq_post_process(const struct otx2_cpt_qp
> > *qp,
> > }
> >
> > if (unlikely(cop->sess_type ==
> > RTE_CRYPTO_OP_SESSIONLESS)) {
> > - sym_session_clear(otx2_cryptodev_driver_id,
> > - cop->sym->session);
> > + sym_session_clear(cop->sym->session);
> > memset(cop->sym->session, 0,
> >
> > rte_cryptodev_sym_get_existing_header_session_size(
> > cop->sym->session));
> > diff --git a/examples/fips_validation/fips_dev_self_test.c
> > b/examples/fips_validation/fips_dev_self_test.c
> > index b4eab05a98..bbc27a1b6f 100644
> > --- a/examples/fips_validation/fips_dev_self_test.c
> > +++ b/examples/fips_validation/fips_dev_self_test.c
> > @@ -969,7 +969,6 @@ struct fips_dev_auto_test_env {
> > struct rte_mempool *mpool;
> > struct rte_mempool *op_pool;
> > struct rte_mempool *sess_pool;
> > - struct rte_mempool *sess_priv_pool;
> > struct rte_mbuf *mbuf;
> > struct rte_crypto_op *op;
> > };
> > @@ -981,7 +980,7 @@ typedef int
> > (*fips_dev_self_test_prepare_xform_t)(uint8_t,
> > uint32_t);
> >
> > typedef int (*fips_dev_self_test_prepare_op_t)(struct rte_crypto_op *,
> > - struct rte_mbuf *, struct rte_cryptodev_sym_session *,
> > + struct rte_mbuf *, void *,
> > uint32_t, struct fips_dev_self_test_vector *);
> >
> > typedef int (*fips_dev_self_test_check_result_t)(struct rte_crypto_op *,
> > @@ -1173,7 +1172,7 @@ prepare_aead_xform(uint8_t dev_id,
> > static int
> > prepare_cipher_op(struct rte_crypto_op *op,
> > struct rte_mbuf *mbuf,
> > - struct rte_cryptodev_sym_session *session,
> > + void *session,
> > uint32_t dir,
> > struct fips_dev_self_test_vector *vec)
> > {
> > @@ -1212,7 +1211,7 @@ prepare_cipher_op(struct rte_crypto_op *op,
> > static int
> > prepare_auth_op(struct rte_crypto_op *op,
> > struct rte_mbuf *mbuf,
> > - struct rte_cryptodev_sym_session *session,
> > + void *session,
> > uint32_t dir,
> > struct fips_dev_self_test_vector *vec)
> > {
> > @@ -1251,7 +1250,7 @@ prepare_auth_op(struct rte_crypto_op *op,
> > static int
> > prepare_aead_op(struct rte_crypto_op *op,
> > struct rte_mbuf *mbuf,
> > - struct rte_cryptodev_sym_session *session,
> > + void *session,
> > uint32_t dir,
> > struct fips_dev_self_test_vector *vec)
> > {
> > @@ -1464,7 +1463,7 @@ run_single_test(uint8_t dev_id,
> > uint32_t negative_test)
> > {
> > struct rte_crypto_sym_xform xform;
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> > uint16_t n_deqd;
> > uint8_t key[256];
> > int ret;
> > @@ -1484,8 +1483,7 @@ run_single_test(uint8_t dev_id,
> > if (!sess)
> > return -ENOMEM;
> >
> > - ret = rte_cryptodev_sym_session_init(dev_id,
> > - sess, &xform, env->sess_priv_pool);
> > + ret = rte_cryptodev_sym_session_init(dev_id, sess, &xform);
> > if (ret < 0) {
> > RTE_LOG(ERR, PMD, "Error %i: Init session\n", ret);
> > return ret;
> > @@ -1533,8 +1531,6 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
> > rte_mempool_free(env->op_pool);
> > if (env->sess_pool)
> > rte_mempool_free(env->sess_pool);
> > - if (env->sess_priv_pool)
> > - rte_mempool_free(env->sess_priv_pool);
> >
> > rte_cryptodev_stop(dev_id);
> > }
> > @@ -1542,7 +1538,7 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
> > static int
> > fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env
> > *env)
> > {
> > - struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> > + struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
> > uint32_t sess_sz =
> > rte_cryptodev_sym_get_private_session_size(dev_id);
> > struct rte_cryptodev_config conf;
> > char name[128];
> > @@ -1586,25 +1582,13 @@ fips_dev_auto_test_init(uint8_t dev_id, struct
> > fips_dev_auto_test_env *env)
> > snprintf(name, 128, "%s%u", "SELF_TEST_SESS_POOL", dev_id);
> >
> > env->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> > - 128, 0, 0, 0, rte_cryptodev_socket_id(dev_id));
> > + 128, sess_sz, 0, 0, rte_cryptodev_socket_id(dev_id));
> > if (!env->sess_pool) {
> > ret = -ENOMEM;
> > goto error_exit;
> > }
> >
> > - memset(name, 0, 128);
> > - snprintf(name, 128, "%s%u", "SELF_TEST_SESS_PRIV_POOL", dev_id);
> > -
> > - env->sess_priv_pool = rte_mempool_create(name,
> > - 128, sess_sz, 0, 0, NULL, NULL, NULL,
> > - NULL, rte_cryptodev_socket_id(dev_id), 0);
> > - if (!env->sess_priv_pool) {
> > - ret = -ENOMEM;
> > - goto error_exit;
> > - }
> > -
> > qp_conf.mp_session = env->sess_pool;
> > - qp_conf.mp_session_private = env->sess_priv_pool;
> >
> > ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
> > rte_cryptodev_socket_id(dev_id));
> > diff --git a/examples/fips_validation/main.c
> > b/examples/fips_validation/main.c
> > index a8daad1f48..03c6ccb5b8 100644
> > --- a/examples/fips_validation/main.c
> > +++ b/examples/fips_validation/main.c
> > @@ -48,13 +48,12 @@ struct cryptodev_fips_validate_env {
> > uint16_t mbuf_data_room;
> > struct rte_mempool *mpool;
> > struct rte_mempool *sess_mpool;
> > - struct rte_mempool *sess_priv_mpool;
> > struct rte_mempool *op_pool;
> > struct rte_mbuf *mbuf;
> > uint8_t *digest;
> > uint16_t digest_len;
> > struct rte_crypto_op *op;
> > - struct rte_cryptodev_sym_session *sess;
> > + void *sess;
> > uint16_t self_test;
> > struct fips_dev_broken_test_config *broken_test_config;
> > } env;
> > @@ -63,7 +62,7 @@ static int
> > cryptodev_fips_validate_app_int(void)
> > {
> > struct rte_cryptodev_config conf = {rte_socket_id(), 1, 0};
> > - struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> > + struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
> > struct rte_cryptodev_info dev_info;
> > uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(
> > env.dev_id);
> > @@ -103,16 +102,11 @@ cryptodev_fips_validate_app_int(void)
> > ret = -ENOMEM;
> >
> > env.sess_mpool = rte_cryptodev_sym_session_pool_create(
> > - "FIPS_SESS_MEMPOOL", 16, 0, 0, 0, rte_socket_id());
> > + "FIPS_SESS_MEMPOOL", 16, sess_sz, 0, 0,
> > + rte_socket_id());
> > if (!env.sess_mpool)
> > goto error_exit;
> >
> > - env.sess_priv_mpool =
> > rte_mempool_create("FIPS_SESS_PRIV_MEMPOOL",
> > - 16, sess_sz, 0, 0, NULL, NULL, NULL,
> > - NULL, rte_socket_id(), 0);
> > - if (!env.sess_priv_mpool)
> > - goto error_exit;
> > -
> > env.op_pool = rte_crypto_op_pool_create(
> > "FIPS_OP_POOL",
> > RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> > @@ -127,7 +121,6 @@ cryptodev_fips_validate_app_int(void)
> > goto error_exit;
> >
> > qp_conf.mp_session = env.sess_mpool;
> > - qp_conf.mp_session_private = env.sess_priv_mpool;
> >
> > ret = rte_cryptodev_queue_pair_setup(env.dev_id, 0, &qp_conf,
> > rte_socket_id());
> > @@ -141,8 +134,6 @@ cryptodev_fips_validate_app_int(void)
> > rte_mempool_free(env.mpool);
> > if (env.sess_mpool)
> > rte_mempool_free(env.sess_mpool);
> > - if (env.sess_priv_mpool)
> > - rte_mempool_free(env.sess_priv_mpool);
> > if (env.op_pool)
> > rte_mempool_free(env.op_pool);
> >
> > @@ -158,7 +149,6 @@ cryptodev_fips_validate_app_uninit(void)
> > rte_cryptodev_sym_session_free(env.sess);
> > rte_mempool_free(env.mpool);
> > rte_mempool_free(env.sess_mpool);
> > - rte_mempool_free(env.sess_priv_mpool);
> > rte_mempool_free(env.op_pool);
> > }
> >
> > @@ -1179,7 +1169,7 @@ fips_run_test(void)
> > return -ENOMEM;
> >
> > ret = rte_cryptodev_sym_session_init(env.dev_id,
> > - env.sess, &xform, env.sess_priv_mpool);
> > + env.sess, &xform);
> > if (ret < 0) {
> > RTE_LOG(ERR, USER1, "Error %i: Init session\n",
> > ret);
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> > secgw/ipsec-secgw.c
> > index 7ad94cb822..65528ee2e7 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -1216,15 +1216,11 @@ ipsec_poll_mode_worker(void)
> > qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
> > qconf->inbound.cdev_map = cdev_map_in;
> > qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
> > - qconf->inbound.session_priv_pool =
> > - socket_ctx[socket_id].session_priv_pool;
> > qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
> > qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
> > qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
> > qconf->outbound.cdev_map = cdev_map_out;
> > qconf->outbound.session_pool =
> > socket_ctx[socket_id].session_pool;
> > - qconf->outbound.session_priv_pool =
> > - socket_ctx[socket_id].session_priv_pool;
> > qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
> > qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
> >
> > @@ -2142,8 +2138,6 @@ cryptodevs_init(uint16_t req_queue_num)
> > qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
> > qp_conf.mp_session =
> > socket_ctx[dev_conf.socket_id].session_pool;
> > - qp_conf.mp_session_private =
> > - socket_ctx[dev_conf.socket_id].session_priv_pool;
> > for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
> > if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
> > &qp_conf, dev_conf.socket_id))
> > @@ -2405,37 +2399,37 @@ session_pool_init(struct socket_ctx *ctx,
> int32_t
> > socket_id, size_t sess_sz)
> > printf("Allocated session pool on socket %d\n",
> > socket_id);
> > }
> >
> > -static void
> > -session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> > - size_t sess_sz)
> > -{
> > - char mp_name[RTE_MEMPOOL_NAMESIZE];
> > - struct rte_mempool *sess_mp;
> > - uint32_t nb_sess;
> > -
> > - snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > - "sess_mp_priv_%u", socket_id);
> > - nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> > - rte_lcore_count());
> > - nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> > - CDEV_MP_CACHE_MULTIPLIER);
> > - sess_mp = rte_mempool_create(mp_name,
> > - nb_sess,
> > - sess_sz,
> > - CDEV_MP_CACHE_SZ,
> > - 0, NULL, NULL, NULL,
> > - NULL, socket_id,
> > - 0);
> > - ctx->session_priv_pool = sess_mp;
> > -
> > - if (ctx->session_priv_pool == NULL)
> > - rte_exit(EXIT_FAILURE,
> > - "Cannot init session priv pool on socket %d\n",
> > - socket_id);
> > - else
> > - printf("Allocated session priv pool on socket %d\n",
> > - socket_id);
> > -}
> > +//static void
> > +//session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> > +// size_t sess_sz)
> > +//{
> > +// char mp_name[RTE_MEMPOOL_NAMESIZE];
> > +// struct rte_mempool *sess_mp;
> > +// uint32_t nb_sess;
> > +//
> > +// snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > +// "sess_mp_priv_%u", socket_id);
> > +// nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> > +// rte_lcore_count());
> > +// nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> > +// CDEV_MP_CACHE_MULTIPLIER);
> > +// sess_mp = rte_mempool_create(mp_name,
> > +// nb_sess,
> > +// sess_sz,
> > +// CDEV_MP_CACHE_SZ,
> > +// 0, NULL, NULL, NULL,
> > +// NULL, socket_id,
> > +// 0);
> > +// ctx->session_priv_pool = sess_mp;
> > +//
> > +// if (ctx->session_priv_pool == NULL)
> > +// rte_exit(EXIT_FAILURE,
> > +// "Cannot init session priv pool on socket %d\n",
> > +// socket_id);
> > +// else
> > +// printf("Allocated session priv pool on socket %d\n",
> > +// socket_id);
> > +//}
> >
> > static void
> > pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
> > @@ -2938,8 +2932,8 @@ main(int32_t argc, char **argv)
> >
> > pool_init(&socket_ctx[socket_id], socket_id,
> > nb_bufs_in_pool);
> > session_pool_init(&socket_ctx[socket_id], socket_id,
> > sess_sz);
> > - session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> > - sess_sz);
> > +// session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> > +// sess_sz);
> > }
> > printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool);
> >
> > diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> > index 03d907cba8..a5921de11c 100644
> > --- a/examples/ipsec-secgw/ipsec.c
> > +++ b/examples/ipsec-secgw/ipsec.c
> > @@ -143,8 +143,7 @@ create_lookaside_session(struct ipsec_ctx
> *ipsec_ctx,
> > struct ipsec_sa *sa,
> > ips->crypto.ses = rte_cryptodev_sym_session_create(
> > ipsec_ctx->session_pool);
> > 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
> > - ips->crypto.ses, sa->xforms,
> > - ipsec_ctx->session_priv_pool);
> > + ips->crypto.ses, sa->xforms);
> >
> > rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
> > &cdev_info);
> > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> > index 8405c48171..673c64e8dc 100644
> > --- a/examples/ipsec-secgw/ipsec.h
> > +++ b/examples/ipsec-secgw/ipsec.h
> > @@ -243,7 +243,6 @@ struct socket_ctx {
> > struct rte_mempool *mbuf_pool;
> > struct rte_mempool *mbuf_pool_indir;
> > struct rte_mempool *session_pool;
> > - struct rte_mempool *session_priv_pool;
> > };
> >
> > struct cnt_blk {
> > diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-
> > secgw/ipsec_worker.c
> > index c545497cee..04bcce49db 100644
> > --- a/examples/ipsec-secgw/ipsec_worker.c
> > +++ b/examples/ipsec-secgw/ipsec_worker.c
> > @@ -537,14 +537,10 @@
> ipsec_wrkr_non_burst_int_port_app_mode(struct
> > eh_event_link_info *links,
> > lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
> > lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in;
> > lconf.inbound.session_pool = socket_ctx[socket_id].session_pool;
> > - lconf.inbound.session_priv_pool =
> > - socket_ctx[socket_id].session_priv_pool;
> > lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
> > lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
> > lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out;
> > lconf.outbound.session_pool = socket_ctx[socket_id].session_pool;
> > - lconf.outbound.session_priv_pool =
> > - socket_ctx[socket_id].session_priv_pool;
> >
> > RTE_LOG(INFO, IPSEC,
> > "Launching event mode worker (non-burst - Tx internal port -
> > "
> > diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> > index 66d1491bf7..bdc3da731c 100644
> > --- a/examples/l2fwd-crypto/main.c
> > +++ b/examples/l2fwd-crypto/main.c
> > @@ -188,7 +188,7 @@ struct l2fwd_crypto_params {
> > struct l2fwd_iv auth_iv;
> > struct l2fwd_iv aead_iv;
> > struct l2fwd_key aad;
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> >
> > uint8_t do_cipher;
> > uint8_t do_hash;
> > @@ -229,7 +229,6 @@ struct rte_mempool *l2fwd_pktmbuf_pool;
> > struct rte_mempool *l2fwd_crypto_op_pool;
> > static struct {
> > struct rte_mempool *sess_mp;
> > - struct rte_mempool *priv_mp;
> > } session_pool_socket[RTE_MAX_NUMA_NODES];
> >
> > /* Per-port statistics struct */
> > @@ -671,11 +670,11 @@ generate_random_key(uint8_t *key, unsigned
> > length)
> > }
> >
> > /* Session is created and is later attached to the crypto operation. 8< */
> > -static struct rte_cryptodev_sym_session *
> > +static void *
> > initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t
> > cdev_id)
> > {
> > struct rte_crypto_sym_xform *first_xform;
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> > int retval = rte_cryptodev_socket_id(cdev_id);
> >
> > if (retval < 0)
> > @@ -703,8 +702,7 @@ initialize_crypto_session(struct
> l2fwd_crypto_options
> > *options, uint8_t cdev_id)
> > return NULL;
> >
> > if (rte_cryptodev_sym_session_init(cdev_id, session,
> > - first_xform,
> > - session_pool_socket[socket_id].priv_mp) < 0)
> > + first_xform) < 0)
> > return NULL;
> >
> > return session;
> > @@ -730,7 +728,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options
> > *options)
> > US_PER_S * BURST_TX_DRAIN_US;
> > struct l2fwd_crypto_params *cparams;
> > struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> >
> > if (qconf->nb_rx_ports == 0) {
> > RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n",
> > lcore_id);
> > @@ -2388,30 +2386,6 @@ initialize_cryptodevs(struct
> l2fwd_crypto_options
> > *options, unsigned nb_ports,
> > } else
> > sessions_needed = enabled_cdev_count;
> >
> > - if (session_pool_socket[socket_id].priv_mp == NULL) {
> > - char mp_name[RTE_MEMPOOL_NAMESIZE];
> > -
> > - snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > - "priv_sess_mp_%u", socket_id);
> > -
> > - session_pool_socket[socket_id].priv_mp =
> > - rte_mempool_create(mp_name,
> > - sessions_needed,
> > - max_sess_sz,
> > - 0, 0, NULL, NULL, NULL,
> > - NULL, socket_id,
> > - 0);
> > -
> > - if (session_pool_socket[socket_id].priv_mp == NULL)
> > {
> > - printf("Cannot create pool on socket %d\n",
> > - socket_id);
> > - return -ENOMEM;
> > - }
> > -
> > - printf("Allocated pool \"%s\" on socket %d\n",
> > - mp_name, socket_id);
> > - }
> > -
> > if (session_pool_socket[socket_id].sess_mp == NULL) {
> > char mp_name[RTE_MEMPOOL_NAMESIZE];
> > snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > @@ -2421,7 +2395,8 @@ initialize_cryptodevs(struct
> l2fwd_crypto_options
> > *options, unsigned nb_ports,
> >
> > rte_cryptodev_sym_session_pool_create(
> > mp_name,
> > sessions_needed,
> > - 0, 0, 0, socket_id);
> > + max_sess_sz,
> > + 0, 0, socket_id);
> >
> > if (session_pool_socket[socket_id].sess_mp == NULL)
> > {
> > printf("Cannot create pool on socket %d\n",
> > @@ -2573,8 +2548,6 @@ initialize_cryptodevs(struct
> l2fwd_crypto_options
> > *options, unsigned nb_ports,
> >
> > qp_conf.nb_descriptors = 2048;
> > qp_conf.mp_session =
> > session_pool_socket[socket_id].sess_mp;
> > - qp_conf.mp_session_private =
> > - session_pool_socket[socket_id].priv_mp;
> >
> > retval = rte_cryptodev_queue_pair_setup(cdev_id, 0,
> > &qp_conf,
> > socket_id);
> > diff --git a/examples/vhost_crypto/main.c
> b/examples/vhost_crypto/main.c
> > index dea7dcbd07..cbb97aaf76 100644
> > --- a/examples/vhost_crypto/main.c
> > +++ b/examples/vhost_crypto/main.c
> > @@ -46,7 +46,6 @@ struct vhost_crypto_info {
> > int vids[MAX_NB_SOCKETS];
> > uint32_t nb_vids;
> > struct rte_mempool *sess_pool;
> > - struct rte_mempool *sess_priv_pool;
> > struct rte_mempool *cop_pool;
> > uint8_t cid;
> > uint32_t qid;
> > @@ -304,7 +303,6 @@ new_device(int vid)
> > }
> >
> > ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
> > - info->sess_priv_pool,
> > rte_lcore_to_socket_id(options.los[i].lcore_id));
> > if (ret) {
> > RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
> > @@ -458,7 +456,6 @@ free_resource(void)
> >
> > rte_mempool_free(info->cop_pool);
> > rte_mempool_free(info->sess_pool);
> > - rte_mempool_free(info->sess_priv_pool);
> >
> > for (j = 0; j < lo->nb_sockets; j++) {
> > rte_vhost_driver_unregister(lo->socket_files[i]);
> > @@ -544,16 +541,12 @@ main(int argc, char *argv[])
> >
> > snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
> > info->sess_pool =
> > rte_cryptodev_sym_session_pool_create(name,
> > - SESSION_MAP_ENTRIES, 0, 0, 0,
> > - rte_lcore_to_socket_id(lo->lcore_id));
> > -
> > - snprintf(name, 127, "SESS_POOL_PRIV_%u", lo->lcore_id);
> > - info->sess_priv_pool = rte_mempool_create(name,
> > SESSION_MAP_ENTRIES,
> >
> > rte_cryptodev_sym_get_private_session_size(
> > - info->cid), 64, 0, NULL, NULL, NULL, NULL,
> > - rte_lcore_to_socket_id(lo->lcore_id), 0);
> > - if (!info->sess_priv_pool || !info->sess_pool) {
> > + info->cid), 0, 0,
> > + rte_lcore_to_socket_id(lo->lcore_id));
> > +
> > + if (!info->sess_pool) {
> > RTE_LOG(ERR, USER1, "Failed to create mempool");
> > goto error_exit;
> > }
> > @@ -574,7 +567,6 @@ main(int argc, char *argv[])
> >
> > qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
> > qp_conf.mp_session = info->sess_pool;
> > - qp_conf.mp_session_private = info->sess_priv_pool;
> >
> > for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
> > ret = rte_cryptodev_queue_pair_setup(info->cid, j,
> > diff --git a/lib/cryptodev/cryptodev_pmd.h
> > b/lib/cryptodev/cryptodev_pmd.h
> > index ae3aac59ae..d758b3b85d 100644
> > --- a/lib/cryptodev/cryptodev_pmd.h
> > +++ b/lib/cryptodev/cryptodev_pmd.h
> > @@ -242,7 +242,6 @@ typedef unsigned int
> > (*cryptodev_asym_get_session_private_size_t)(
> > * @param dev Crypto device pointer
> > * @param xform Single or chain of crypto xforms
> > * @param session Pointer to cryptodev's private session
> > structure
> > - * @param mp Mempool where the private session is
> > allocated
> > *
> > * @return
> > * - Returns 0 if private session structure have been created successfully.
> > @@ -251,9 +250,7 @@ typedef unsigned int
> > (*cryptodev_asym_get_session_private_size_t)(
> > * - Returns -ENOMEM if the private session could not be allocated.
> > */
> > typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev
> > *dev,
> > - struct rte_crypto_sym_xform *xform,
> > - struct rte_cryptodev_sym_session *session,
> > - struct rte_mempool *mp);
> > + struct rte_crypto_sym_xform *xform, void *session);
> > /**
> > * Configure a Crypto asymmetric session on a device.
> > *
> > @@ -279,7 +276,7 @@ typedef int
> > (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
> > * @param sess Cryptodev session structure
> > */
> > typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
> > - struct rte_cryptodev_sym_session *sess);
> > + void *sess);
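
For anyone updating an out-of-tree PMD: under these new signatures the
'sess' argument is already the driver-private area carved out of the
session mempool object, so the mempool get/put and
set_sym_session_private_data() dance disappears. A rough, untested
sketch of the resulting callback pair (struct dummy_pmd_session and its
key layout are made up purely for illustration):

#include <errno.h>
#include <string.h>

#include <rte_common.h>
#include <rte_cryptodev.h>

struct dummy_pmd_session {	/* hypothetical driver-private layout */
	uint8_t key[64];
	uint16_t key_len;
};

static int
dummy_pmd_sym_session_configure(struct rte_cryptodev *dev,
		struct rte_crypto_sym_xform *xform, void *sess)
{
	struct dummy_pmd_session *priv = sess;

	RTE_SET_USED(dev);

	if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
			xform->cipher.key.length > sizeof(priv->key))
		return -ENOTSUP;

	/* private area is pre-allocated by the framework, just fill it */
	memcpy(priv->key, xform->cipher.key.data, xform->cipher.key.length);
	priv->key_len = xform->cipher.key.length;
	return 0;
}

static void
dummy_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
{
	RTE_SET_USED(dev);

	/* wipe in place; no rte_mempool_put() of a private object anymore */
	if (sess != NULL)
		memset(sess, 0, sizeof(struct dummy_pmd_session));
}
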
> > /**
> > * Free asymmetric session private data.
> > *
> > diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
> > index a864f5036f..200617f623 100644
> > --- a/lib/cryptodev/rte_crypto.h
> > +++ b/lib/cryptodev/rte_crypto.h
> > @@ -420,7 +420,7 @@ rte_crypto_op_sym_xforms_alloc(struct
> > rte_crypto_op *op, uint8_t nb_xforms)
> > */
> > static inline int
> > rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
> > - struct rte_cryptodev_sym_session *sess)
> > + void *sess)
> > {
> > if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
> > return -1;
> > diff --git a/lib/cryptodev/rte_crypto_sym.h
> > b/lib/cryptodev/rte_crypto_sym.h
> > index 58c0724743..848da1942c 100644
> > --- a/lib/cryptodev/rte_crypto_sym.h
> > +++ b/lib/cryptodev/rte_crypto_sym.h
> > @@ -932,7 +932,7 @@ __rte_crypto_sym_op_sym_xforms_alloc(struct
> > rte_crypto_sym_op *sym_op,
> > */
> > static inline int
> > __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op
> > *sym_op,
> > - struct rte_cryptodev_sym_session *sess)
> > + void *sess)
> > {
> > sym_op->session = sess;
> >
> > diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> > index 9fa3aff1d3..a31cae202a 100644
> > --- a/lib/cryptodev/rte_cryptodev.c
> > +++ b/lib/cryptodev/rte_cryptodev.c
> > @@ -199,6 +199,8 @@ struct
> > rte_cryptodev_sym_session_pool_private_data {
> > /**< number of elements in sess_data array */
> > uint16_t user_data_sz;
> > /**< session user data will be placed after sess_data */
> > + uint16_t sess_priv_sz;
> > +	/**< Maximum private session data size which each driver can use */
> > };
> >
> > int
> > @@ -1223,8 +1225,7 @@ rte_cryptodev_queue_pair_setup(uint8_t
> dev_id,
> > uint16_t queue_pair_id,
> > return -EINVAL;
> > }
> >
> > - if ((qp_conf->mp_session && !qp_conf->mp_session_private) ||
> > -			(!qp_conf->mp_session && qp_conf->mp_session_private)) {
> > + if (!qp_conf->mp_session) {
> > CDEV_LOG_ERR("Invalid mempools\n");
> > return -EINVAL;
> > }
> > @@ -1232,7 +1233,6 @@ rte_cryptodev_queue_pair_setup(uint8_t
> dev_id,
> > uint16_t queue_pair_id,
> > if (qp_conf->mp_session) {
> > struct rte_cryptodev_sym_session_pool_private_data
> > *pool_priv;
> > uint32_t obj_size = qp_conf->mp_session->elt_size;
> > -		uint32_t obj_priv_size = qp_conf->mp_session_private->elt_size;
> > struct rte_cryptodev_sym_session s = {0};
> >
> > pool_priv = rte_mempool_get_priv(qp_conf->mp_session);
> > @@ -1244,11 +1244,11 @@ rte_cryptodev_queue_pair_setup(uint8_t
> > dev_id, uint16_t queue_pair_id,
> >
> > s.nb_drivers = pool_priv->nb_drivers;
> > s.user_data_sz = pool_priv->user_data_sz;
> > + s.priv_sz = pool_priv->sess_priv_sz;
> >
> > -		if ((rte_cryptodev_sym_get_existing_header_session_size(&s) >
> > -				obj_size) || (s.nb_drivers <= dev->driver_id) ||
> > -				rte_cryptodev_sym_get_private_session_size(dev_id) >
> > -				obj_priv_size) {
> > +		if (((rte_cryptodev_sym_get_existing_header_session_size(&s) +
> > +				(s.nb_drivers * s.priv_sz)) > obj_size) ||
> > +				(s.nb_drivers <= dev->driver_id)) {
> > CDEV_LOG_ERR("Invalid mempool\n");
> > return -EINVAL;
> > }
> > @@ -1710,11 +1710,11 @@ rte_cryptodev_pmd_callback_process(struct
> > rte_cryptodev *dev,
> >
> > int
> > rte_cryptodev_sym_session_init(uint8_t dev_id,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_crypto_sym_xform *xforms,
> > - struct rte_mempool *mp)
> > + void *sess_opaque,
> > + struct rte_crypto_sym_xform *xforms)
> > {
> > struct rte_cryptodev *dev;
> > + struct rte_cryptodev_sym_session *sess = sess_opaque;
> > uint32_t sess_priv_sz =
> > rte_cryptodev_sym_get_private_session_size(
> > dev_id);
> > uint8_t index;
> > @@ -1727,10 +1727,10 @@ rte_cryptodev_sym_session_init(uint8_t
> dev_id,
> >
> > dev = rte_cryptodev_pmd_get_dev(dev_id);
> >
> > - if (sess == NULL || xforms == NULL || dev == NULL || mp == NULL)
> > + if (sess == NULL || xforms == NULL || dev == NULL)
> > return -EINVAL;
> >
> > - if (mp->elt_size < sess_priv_sz)
> > + if (sess->priv_sz < sess_priv_sz)
> > return -EINVAL;
> >
> > index = dev->driver_id;
> > @@ -1740,8 +1740,11 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> > >sym_session_configure, -ENOTSUP);
> >
> > if (sess->sess_data[index].refcnt == 0) {
> > + sess->sess_data[index].data = (void *)((uint8_t *)sess +
> > +				rte_cryptodev_sym_get_header_session_size() +
> > + (index * sess->priv_sz));
> > ret = dev->dev_ops->sym_session_configure(dev, xforms,
> > - sess, mp);
> > + sess->sess_data[index].data);
> > if (ret < 0) {
> > CDEV_LOG_ERR(
> > "dev_id %d failed to configure session
> > details",
> > @@ -1750,7 +1753,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> > }
> > }
> >
> > - rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms, mp);
> > + rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms);
> > sess->sess_data[index].refcnt++;
> > return 0;
> > }
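
If I read this hunk right, the whole rework boils down to: one mempool
object is the generic session header followed by a fixed-size private
slot per driver, and init() simply points sess_data[index].data into
that slot. A tiny illustrative helper spelling out the same offset math
(not proposing to add it, just to make the layout explicit):

#include <stdint.h>

#include <rte_cryptodev.h>

/* Illustration only: where driver 'index' private data lives inside a
 * single session mempool object, mirroring the pointer set up above.
 */
static inline void *
sym_sess_driver_priv(struct rte_cryptodev_sym_session *sess, uint8_t index)
{
	return (uint8_t *)sess +
		rte_cryptodev_sym_get_header_session_size() +
		(size_t)index * sess->priv_sz;
}
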
> > @@ -1795,6 +1798,21 @@ rte_cryptodev_asym_session_init(uint8_t
> dev_id,
> > rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
> > return 0;
> > }
> > +static size_t
> > +get_max_sym_sess_priv_sz(void)
> > +{
> > + size_t max_sz, sz;
> > + int16_t cdev_id, n;
> > +
> > + max_sz = 0;
> > + n = rte_cryptodev_count();
> > + for (cdev_id = 0; cdev_id != n; cdev_id++) {
> > + sz = rte_cryptodev_sym_get_private_session_size(cdev_id);
> > + if (sz > max_sz)
> > + max_sz = sz;
> > + }
> > + return max_sz;
> > +}
> >
> > struct rte_mempool *
> > rte_cryptodev_sym_session_pool_create(const char *name, uint32_t
> > nb_elts,
> > @@ -1804,15 +1822,15 @@
> rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> > struct rte_mempool *mp;
> > struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
> > uint32_t obj_sz;
> > + uint32_t sess_priv_sz = get_max_sym_sess_priv_sz();
> >
> > obj_sz = rte_cryptodev_sym_get_header_session_size() +
> > user_data_size;
> > - if (obj_sz > elt_size)
> > + if (elt_size < obj_sz + (sess_priv_sz * nb_drivers)) {
> > CDEV_LOG_INFO("elt_size %u is expanded to %u\n",
> > elt_size,
> > - obj_sz);
> > - else
> > - obj_sz = elt_size;
> > -
> > - mp = rte_mempool_create(name, nb_elts, obj_sz, cache_size,
> > + obj_sz + (sess_priv_sz * nb_drivers));
> > + elt_size = obj_sz + (sess_priv_sz * nb_drivers);
> > + }
> > + mp = rte_mempool_create(name, nb_elts, elt_size, cache_size,
> > (uint32_t)(sizeof(*pool_priv)),
> > NULL, NULL, NULL, NULL,
> > socket_id, 0);
> > @@ -1832,6 +1850,7 @@ rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> >
> > pool_priv->nb_drivers = nb_drivers;
> > pool_priv->user_data_sz = user_data_size;
> > + pool_priv->sess_priv_sz = sess_priv_sz;
> >
> > rte_cryptodev_trace_sym_session_pool_create(name, nb_elts,
> > elt_size, cache_size, user_data_size, mp);
> > @@ -1865,7 +1884,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct
> > rte_mempool *mp)
> > return 1;
> > }
> >
> > -struct rte_cryptodev_sym_session *
> > +void *
> > rte_cryptodev_sym_session_create(struct rte_mempool *mp)
> > {
> > struct rte_cryptodev_sym_session *sess;
> > @@ -1886,6 +1905,7 @@ rte_cryptodev_sym_session_create(struct
> > rte_mempool *mp)
> >
> > sess->nb_drivers = pool_priv->nb_drivers;
> > sess->user_data_sz = pool_priv->user_data_sz;
> > + sess->priv_sz = pool_priv->sess_priv_sz;
> > sess->opaque_data = 0;
> >
> > /* Clear device session pointer.
> > @@ -1895,7 +1915,7 @@ rte_cryptodev_sym_session_create(struct
> > rte_mempool *mp)
> > rte_cryptodev_sym_session_data_size(sess));
> >
> > rte_cryptodev_trace_sym_session_create(mp, sess);
> > - return sess;
> > + return (void *)sess;
> > }
> >
> > struct rte_cryptodev_asym_session *
> > @@ -1933,9 +1953,9 @@ rte_cryptodev_asym_session_create(struct
> > rte_mempool *mp)
> > }
> >
> > int
> > -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> > - struct rte_cryptodev_sym_session *sess)
> > +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *s)
> > {
> > + struct rte_cryptodev_sym_session *sess = s;
> > struct rte_cryptodev *dev;
> > uint8_t driver_id;
> >
> > @@ -1957,7 +1977,7 @@ rte_cryptodev_sym_session_clear(uint8_t
> dev_id,
> >
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_clear,
> > -ENOTSUP);
> >
> > - dev->dev_ops->sym_session_clear(dev, sess);
> > +	dev->dev_ops->sym_session_clear(dev, sess->sess_data[driver_id].data);
> >
> > rte_cryptodev_trace_sym_session_clear(dev_id, sess);
> > return 0;
> > @@ -1988,10 +2008,11 @@ rte_cryptodev_asym_session_clear(uint8_t
> > dev_id,
> > }
> >
> > int
> > -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
> > +rte_cryptodev_sym_session_free(void *s)
> > {
> > uint8_t i;
> > struct rte_mempool *sess_mp;
> > + struct rte_cryptodev_sym_session *sess = s;
> >
> > if (sess == NULL)
> > return -EINVAL;
> > diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> > index bb01f0f195..25af6fa7b9 100644
> > --- a/lib/cryptodev/rte_cryptodev.h
> > +++ b/lib/cryptodev/rte_cryptodev.h
> > @@ -537,8 +537,6 @@ struct rte_cryptodev_qp_conf {
> > uint32_t nb_descriptors; /**< Number of descriptors per queue pair
> > */
> > struct rte_mempool *mp_session;
> > /**< The mempool for creating session in sessionless mode */
> > - struct rte_mempool *mp_session_private;
> > - /**< The mempool for creating sess private data in sessionless mode
> > */
> > };
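
The queue pair side then needs nothing beyond the one mempool; a small
sketch of what an application fills in now (descriptor count and queue
id are arbitrary choices here):

#include <rte_cryptodev.h>

static int
setup_qp(uint8_t dev_id, uint16_t qp_id, struct rte_mempool *sess_mp,
		int socket_id)
{
	struct rte_cryptodev_qp_conf qp_conf = {
		.nb_descriptors = 2048,
		.mp_session = sess_mp,	/* single pool, no mp_session_private */
	};

	return rte_cryptodev_queue_pair_setup(dev_id, qp_id, &qp_conf, socket_id);
}
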
> >
> > /**
> > @@ -1126,6 +1124,8 @@ struct rte_cryptodev_sym_session {
> > /**< number of elements in sess_data array */
> > uint16_t user_data_sz;
> > /**< session user data will be placed after sess_data */
> > + uint16_t priv_sz;
> > + /**< Maximum private session data size which each driver can use */
> > __extension__ struct {
> > void *data;
> > uint16_t refcnt;
> > @@ -1177,10 +1177,10 @@
> rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> > * @param mempool Symmetric session mempool to allocate session
> > * objects from
> > * @return
> > - * - On success return pointer to sym-session
> > + * - On success return opaque pointer to sym-session
> > * - On failure returns NULL
> > */
> > -struct rte_cryptodev_sym_session *
> > +void *
> > rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
> >
> > /**
> > @@ -1209,7 +1209,7 @@ rte_cryptodev_asym_session_create(struct
> > rte_mempool *mempool);
> > * - -EBUSY if not all device private data has been freed.
> > */
> > int
> > -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session
> > *sess);
> > +rte_cryptodev_sym_session_free(void *sess);
> >
> > /**
> > * Frees asymmetric crypto session header, after checking that all
> > @@ -1229,25 +1229,23 @@ rte_cryptodev_asym_session_free(struct
> > rte_cryptodev_asym_session *sess);
> >
> > /**
> > * Fill out private data for the device id, based on its device type.
> > + * Memory for the private data is already allocated in sess; the
> > + * driver needs to fill in the content.
> > *
> > * @param dev_id ID of device that we want the session to be used on
> > * @param sess Session where the private data will be attached to
> > * @param xforms Symmetric crypto transform operations to apply on
> > flow
> > * processed with this session
> > - * @param mempool Mempool where the private data is allocated.
> > *
> > * @return
> > * - On success, zero.
> > * - -EINVAL if input parameters are invalid.
> > * - -ENOTSUP if crypto device does not support the crypto transform or
> > * does not support symmetric operations.
> > - * - -ENOMEM if the private session could not be allocated.
> > */
> > int
> > -rte_cryptodev_sym_session_init(uint8_t dev_id,
> > - struct rte_cryptodev_sym_session *sess,
> > - struct rte_crypto_sym_xform *xforms,
> > - struct rte_mempool *mempool);
> > +rte_cryptodev_sym_session_init(uint8_t dev_id, void *sess,
> > + struct rte_crypto_sym_xform *xforms);
> >
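
From an application point of view the new contract seems to be: size
the single session mempool so each object can carry the per-driver
private data (pool_create() now expands elt_size itself), then create
and init the session with no private mempool at all. A minimal,
untested sketch of that flow (pool name, element count and error
handling are arbitrary):

#include <rte_cryptodev.h>

/* One pool per socket; elt_size 0 lets the library expand it to
 * header + per-driver private data (see the pool_create() change).
 */
static struct rte_mempool *
create_sym_sess_pool(int socket_id)
{
	return rte_cryptodev_sym_session_pool_create("sym_sess_mp",
			128 /* nb_elts */, 0 /* elt_size */,
			0 /* cache_size */, 0 /* user_data_size */, socket_id);
}

static void *
create_sym_session(uint8_t dev_id, struct rte_mempool *mp,
		struct rte_crypto_sym_xform *xform)
{
	void *sess = rte_cryptodev_sym_session_create(mp);

	if (sess == NULL)
		return NULL;

	if (rte_cryptodev_sym_session_init(dev_id, sess, xform) < 0) {
		rte_cryptodev_sym_session_free(sess);
		return NULL;
	}
	return sess;
}
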
> > /**
> > * Initialize asymmetric session on a device with specific asymmetric xform
> > @@ -1286,8 +1284,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
> > * - -ENOTSUP if crypto device does not support symmetric operations.
> > */
> > int
> > -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> > - struct rte_cryptodev_sym_session *sess);
> > +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *sess);
> >
> > /**
> > * Frees resources held by asymmetric session during
> > rte_cryptodev_session_init
> > diff --git a/lib/cryptodev/rte_cryptodev_trace.h
> > b/lib/cryptodev/rte_cryptodev_trace.h
> > index d1f4f069a3..44da04c425 100644
> > --- a/lib/cryptodev/rte_cryptodev_trace.h
> > +++ b/lib/cryptodev/rte_cryptodev_trace.h
> > @@ -56,7 +56,6 @@ RTE_TRACE_POINT(
> > rte_trace_point_emit_u16(queue_pair_id);
> > rte_trace_point_emit_u32(conf->nb_descriptors);
> > rte_trace_point_emit_ptr(conf->mp_session);
> > - rte_trace_point_emit_ptr(conf->mp_session_private);
> > )
> >
> > RTE_TRACE_POINT(
> > @@ -106,15 +105,13 @@ RTE_TRACE_POINT(
> > RTE_TRACE_POINT(
> > rte_cryptodev_trace_sym_session_init,
> > RTE_TRACE_POINT_ARGS(uint8_t dev_id,
> > - struct rte_cryptodev_sym_session *sess, void *xforms,
> > - void *mempool),
> > + struct rte_cryptodev_sym_session *sess, void *xforms),
> > rte_trace_point_emit_u8(dev_id);
> > rte_trace_point_emit_ptr(sess);
> > rte_trace_point_emit_u64(sess->opaque_data);
> > rte_trace_point_emit_u16(sess->nb_drivers);
> > rte_trace_point_emit_u16(sess->user_data_sz);
> > rte_trace_point_emit_ptr(xforms);
> > - rte_trace_point_emit_ptr(mempool);
> > )
> >
> > RTE_TRACE_POINT(
> > diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
> > index ad7904c0ee..efdba9c899 100644
> > --- a/lib/pipeline/rte_table_action.c
> > +++ b/lib/pipeline/rte_table_action.c
> > @@ -1719,7 +1719,7 @@ struct sym_crypto_data {
> > uint16_t op_mask;
> >
> > /** Session pointer. */
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> >
> > /** Direction of crypto, encrypt or decrypt */
> > uint16_t direction;
> > @@ -1780,7 +1780,7 @@ sym_crypto_apply(struct sym_crypto_data
> *data,
> > const struct rte_crypto_auth_xform *auth_xform = NULL;
> > const struct rte_crypto_aead_xform *aead_xform = NULL;
> > struct rte_crypto_sym_xform *xform = p->xform;
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> > int ret;
> >
> > memset(data, 0, sizeof(*data));
> > @@ -1905,7 +1905,7 @@ sym_crypto_apply(struct sym_crypto_data
> *data,
> > return -ENOMEM;
> >
> > ret = rte_cryptodev_sym_session_init(cfg->cryptodev_id, session,
> > - p->xform, cfg->mp_init);
> > + p->xform);
> > if (ret < 0) {
> > rte_cryptodev_sym_session_free(session);
> > return ret;
> > @@ -2858,7 +2858,7 @@ rte_table_action_time_read(struct
> > rte_table_action *action,
> > return 0;
> > }
> >
> > -struct rte_cryptodev_sym_session *
> > +void *
> > rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
> > void *data)
> > {
> > diff --git a/lib/pipeline/rte_table_action.h b/lib/pipeline/rte_table_action.h
> > index 82bc9d9ac9..68db453a8b 100644
> > --- a/lib/pipeline/rte_table_action.h
> > +++ b/lib/pipeline/rte_table_action.h
> > @@ -1129,7 +1129,7 @@ rte_table_action_time_read(struct
> > rte_table_action *action,
> > * The pointer to the session on success, NULL otherwise.
> > */
> > __rte_experimental
> > -struct rte_cryptodev_sym_session *
> > +void *
> > rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
> > void *data);
> >
> > diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
> > index f54d731139..d9b7beed9c 100644
> > --- a/lib/vhost/rte_vhost_crypto.h
> > +++ b/lib/vhost/rte_vhost_crypto.h
> > @@ -50,8 +50,6 @@ rte_vhost_crypto_driver_start(const char *path);
> > * multiple Vhost-crypto devices.
> > * @param sess_pool
> > * The pointer to the created cryptodev session pool.
> > - * @param sess_priv_pool
> > - * The pointer to the created cryptodev session private data mempool.
> > * @param socket_id
> > * NUMA Socket ID to allocate resources on. *
> > * @return
> > @@ -61,7 +59,6 @@ rte_vhost_crypto_driver_start(const char *path);
> > int
> > rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
> > struct rte_mempool *sess_pool,
> > - struct rte_mempool *sess_priv_pool,
> > int socket_id);
> >
> > /**
> > diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
> > index 926b5c0bd9..b4464c4253 100644
> > --- a/lib/vhost/vhost_crypto.c
> > +++ b/lib/vhost/vhost_crypto.c
> > @@ -338,7 +338,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> > *vcrypto,
> > VhostUserCryptoSessionParam *sess_param)
> > {
> > struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
> > - struct rte_cryptodev_sym_session *session;
> > + void *session;
> > int ret;
> >
> > switch (sess_param->op_type) {
> > @@ -383,8 +383,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> > *vcrypto,
> > return;
> > }
> >
> > - if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1,
> > - vcrypto->sess_priv_pool) < 0) {
> > + if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1)
> > < 0) {
> > VC_LOG_ERR("Failed to initialize session");
> > sess_param->session_id = -VIRTIO_CRYPTO_ERR;
> > return;
> > @@ -1425,7 +1424,6 @@ rte_vhost_crypto_driver_start(const char *path)
> > int
> > rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
> > struct rte_mempool *sess_pool,
> > - struct rte_mempool *sess_priv_pool,
> > int socket_id)
> > {
> > struct virtio_net *dev = get_device(vid);
> > @@ -1447,7 +1445,6 @@ rte_vhost_crypto_create(int vid, uint8_t
> > cryptodev_id,
> > }
> >
> > vcrypto->sess_pool = sess_pool;
> > - vcrypto->sess_priv_pool = sess_priv_pool;
> > vcrypto->cid = cryptodev_id;
> > vcrypto->cache_session_id = UINT64_MAX;
> > vcrypto->last_session_id = 1;
> > --
> > 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2021-10-04 13:55 6% ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-04 13:55 2% ` Konstantin Ananyev
2021-10-05 13:09 0% ` Thomas Monjalon
2021-10-04 13:56 2% ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
` (3 subsequent siblings)
5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep minimal part of data from rte_eth_dev public,
so we still can use inline functions for 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.h | 7 +++++
lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++
4 files changed, 131 insertions(+)
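
Not part of the diff below, just to make the intent above concrete:
with the flat array in place, an inline burst wrapper only needs the
per-port fp_ops entry and the queue data pointer. A rough sketch of the
core indexing idea (assuming rte_eth_fp_ops[] ends up declared in a
public header, and ignoring the callback and debug paths the real
rte_eth_rx_burst() keeps):

#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_ethdev.h>

static inline uint16_t
sketch_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	struct rte_eth_fp_ops *fpo = &rte_eth_fp_ops[port_id];

	/* one pointer chase for the burst function, one for the queue data */
	return fpo->rx_pkt_burst(fpo->rxq.data[queue_id], rx_pkts, nb_pkts);
}
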
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
return str == NULL ? -1 : 0;
}
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+ __rte_unused struct rte_mbuf **rx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_eth_fp_ops dummy_ops = {
+ .rx_pkt_burst = dummy_eth_rx_burst,
+ .tx_pkt_burst = dummy_eth_tx_burst,
+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
+ .txq = {.data = dummy_data, .clbk = dummy_data,},
+ };
+
+ *fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev)
+{
+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+ fpo->rx_queue_count = dev->rx_queue_count;
+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+ fpo->rxq.data = dev->data->rx_queues;
+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+ fpo->txq.data = dev->data->tx_queues;
+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..40333e7651 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
/* Parse devargs value for representor parameter. */
int rte_eth_devargs_parse_representor_ports(char *str, void *data);
+/* reset eth 'fast' API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth 'fast' API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+ const struct rte_eth_dev *dev);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..036c82cbfb 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
+/* public 'fast' API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/* spinlock for eth device callbacks */
static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
rte_eth_dev_callback_process(eth_dev,
RTE_ETH_EVENT_DESTROY, NULL);
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1788,6 +1793,9 @@ rte_eth_dev_start(uint16_t port_id)
(*dev->dev_ops->link_update)(dev, 0);
}
+ /* expose selection of PMD rx/tx function */
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
rte_ethdev_trace_start(port_id);
return 0;
}
@@ -1810,6 +1818,9 @@ rte_eth_dev_stop(uint16_t port_id)
return 0;
}
+ /* point rx/tx functions to dummy ones */
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
dev->data->dev_started = 0;
ret = (*dev->dev_ops->dev_stop)(dev);
rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4579,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
}
+RTE_INIT(eth_dev_init_fp_ops)
+{
+ uint32_t i;
+
+ for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
RTE_INIT(eth_dev_init_cb_lists)
{
uint16_t i;
@@ -4736,6 +4755,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
if (dev == NULL)
return;
+ /*
+ * for a secondary process, at this point we expect the device
+ * to be already 'usable', so shared data and all function pointers
+ * for 'fast' devops have to be set up properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 948c0b71c1..fe47a660c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
/**< @internal Check the status of a Tx descriptor */
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev RX/TX
+ * queues data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+ void **data;
+ /**< points to array of internal queue data pointers */
+ void **clbk;
+ /**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_fp_ops {
+
+ /** first 64B line */
+ eth_rx_burst_t rx_pkt_burst;
+ /**< PMD receive function. */
+ eth_tx_burst_t tx_pkt_burst;
+ /**< PMD transmit function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /**< PMD transmit prepare function. */
+ eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+ uintptr_t reserved[2];
+
+ /** second 64B line */
+ struct rte_ethdev_qdata rxq;
+ struct rte_ethdev_qdata txq;
+ uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
/**
* @internal
--
2.26.3
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
` (2 preceding siblings ...)
2021-10-04 13:56 2% ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-04 13:56 9% ` Konstantin Ananyev
2021-10-05 10:04 0% ` David Marchand
2021-10-06 16:42 0% ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
A few minor changes keep DPDK building after that.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/netvsc/hn_var.h | 1 +
lib/ethdev/ethdev_driver.h | 149 ++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 143 -----------------
lib/ethdev/version.map | 2 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
13 files changed, 165 insertions(+), 152 deletions(-)
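As a sketch of what this change means on the driver side (illustrative only;
sketch_driver_link_up and its body are hypothetical), PMD sources now pull
struct rte_eth_dev and friends from the internal driver header instead of the
public one, as the include changes below show:
/* driver-side code: internal structures now come from ethdev_driver.h */
#include <ethdev_driver.h>
static int
sketch_driver_link_up(struct rte_eth_dev *dev)
{
        /* internal fields such as dev->data stay reachable for PMDs */
        dev->data->dev_link.link_status = ETH_LINK_UP;
        return 0;
}
Application code, by contrast, keeps using only the public rte_ethdev.h API
and is not affected.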
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6055551443..2944149943 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -228,6 +228,12 @@ ABI Changes
to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
is used by public inline function ``rte_eth_rx_queue_count``.
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+ private data structures. ``rte_eth_devices[]`` can no longer be accessed
+ directly by the user. While it is an ABI breakage, this change is intended
+ to be transparent for both users (no changes in user applications are
+ required) and PMD developers (no changes in PMDs are required).
+
Known Issues
------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
#include <rte_atomic.h>
#include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_spinlock.h>
#include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
#include <cryptodev_pmd.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_event_crypto_adapter.h>
#include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
#include <rte_mbuf.h>
#include <rte_io.h>
#include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "../cxgbe_compat.h"
#include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
#include <unistd.h>
#include <stdarg.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_eth_ctrl.h>
#include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
*/
#include <rte_eal_paging.h>
+#include <ethdev_driver.h>
/*
* Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@
#include <rte_ethdev.h>
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+ struct rte_eth_rxtx_callback *next;
+ union{
+ rte_rx_callback_fn rx;
+ rte_tx_callback_fn tx;
+ } fn;
+ void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+ eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+ eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+ eth_tx_prep_t tx_pkt_prepare;
+ /**< Pointer to PMD transmit prepare function. */
+ eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+
+ /**
+ * Next two fields are per-device data but *data is shared between
+ * primary and secondary processes and *process_private is per-process
+ * private. The second one is managed by PMDs if necessary.
+ */
+ struct rte_eth_dev_data *data; /**< Pointer to device data. */
+ void *process_private; /**< Pointer to per-process device data. */
+ const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+ struct rte_device *device; /**< Backing device */
+ struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+ /** User application callbacks for NIC interrupts */
+ struct rte_eth_dev_cb_list link_intr_cbs;
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ enum rte_eth_dev_state state; /**< Flag indicating the port state */
+ void *security_ctx; /**< Context for security ops */
+
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+ char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+ void **rx_queues; /**< Array of pointers to RX queues. */
+ void **tx_queues; /**< Array of pointers to TX queues. */
+ uint16_t nb_rx_queues; /**< Number of RX queues. */
+ uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+ struct rte_eth_dev_sriov sriov; /**< SRIOV data */
+
+ void *dev_private;
+ /**< PMD-specific private data.
+ * @see rte_eth_dev_release_port()
+ */
+
+ struct rte_eth_link dev_link; /**< Link-level information & status. */
+ struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
+ uint16_t mtu; /**< Maximum Transmission Unit. */
+ uint32_t min_rx_buf_size;
+ /**< Common RX buffer size handled by all queues. */
+
+ uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+ struct rte_ether_addr *mac_addrs;
+ /**< Device Ethernet link address.
+ * @see rte_eth_dev_release_port()
+ */
+ uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ /**< Bitmap associating MAC addresses to pools. */
+ struct rte_ether_addr *hash_mac_addrs;
+ /**< Device Ethernet MAC addresses of hash filtering.
+ * @see rte_eth_dev_release_port()
+ */
+ uint16_t port_id; /**< Device [external] port identifier. */
+
+ __extension__
+ uint8_t promiscuous : 1,
+ /**< RX promiscuous mode ON(1) / OFF(0). */
+ scattered_rx : 1,
+ /**< RX of scattered packets is ON(1) / OFF(0) */
+ all_multicast : 1,
+ /**< RX all multicast mode ON(1) / OFF(0). */
+ dev_started : 1,
+ /**< Device state: STARTED(1) / STOPPED(0). */
+ lro : 1,
+ /**< RX LRO is ON(1) / OFF(0) */
+ dev_configured : 1;
+ /**< Indicates whether the device is configured.
+ * CONFIGURED(1) / NOT CONFIGURED(0).
+ */
+ uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+ /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+ uint32_t dev_flags; /**< Capabilities. */
+ int numa_node; /**< NUMA node connection. */
+ struct rte_vlan_filter_conf vlan_filter_conf;
+ /**< VLAN filter configuration. */
+ struct rte_eth_dev_owner owner; /**< The port owner. */
+ uint16_t representor_id;
+ /**< Switch-specific identifier.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
+
+ pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+ uint64_t reserved_64s[4]; /**< Reserved for future fields */
+ void *reserved_ptrs[4]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
/**< @internal Declaration of the hairpin peer queue information structure. */
struct rte_hairpin_peer_info;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 63078e1ef4..2d07db0811 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {
extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
- struct rte_eth_rxtx_callback *next;
- union{
- rte_rx_callback_fn rx;
- rte_tx_callback_fn tx;
- } fn;
- void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
- eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
- eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
- eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
- eth_rx_queue_count_t rx_queue_count; /**< Get the number of used RX descriptors. */
- eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
- eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
- /**
- * Next two fields are per-device data but *data is shared between
- * primary and secondary processes and *process_private is per-process
- * private. The second one is managed by PMDs if necessary.
- */
- struct rte_eth_dev_data *data; /**< Pointer to device data. */
- void *process_private; /**< Pointer to per-process device data. */
- const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
- struct rte_device *device; /**< Backing device */
- struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
- /** User application callbacks for NIC interrupts */
- struct rte_eth_dev_cb_list link_intr_cbs;
- /**
- * User-supplied functions called from rx_burst to post-process
- * received packets before passing them to the user
- */
- struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- /**
- * User-supplied functions called from tx_burst to pre-process
- * received packets before passing them to the driver for transmission.
- */
- struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
- enum rte_eth_dev_state state; /**< Flag indicating the port state */
- void *security_ctx; /**< Context for security ops */
-
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
- char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
- void **rx_queues; /**< Array of pointers to RX queues. */
- void **tx_queues; /**< Array of pointers to TX queues. */
- uint16_t nb_rx_queues; /**< Number of RX queues. */
- uint16_t nb_tx_queues; /**< Number of TX queues. */
-
- struct rte_eth_dev_sriov sriov; /**< SRIOV data */
-
- void *dev_private;
- /**< PMD-specific private data.
- * @see rte_eth_dev_release_port()
- */
-
- struct rte_eth_link dev_link; /**< Link-level information & status. */
- struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
- uint16_t mtu; /**< Maximum Transmission Unit. */
- uint32_t min_rx_buf_size;
- /**< Common RX buffer size handled by all queues. */
-
- uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
- struct rte_ether_addr *mac_addrs;
- /**< Device Ethernet link address.
- * @see rte_eth_dev_release_port()
- */
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
- /**< Bitmap associating MAC addresses to pools. */
- struct rte_ether_addr *hash_mac_addrs;
- /**< Device Ethernet MAC addresses of hash filtering.
- * @see rte_eth_dev_release_port()
- */
- uint16_t port_id; /**< Device [external] port identifier. */
-
- __extension__
- uint8_t promiscuous : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
- scattered_rx : 1, /**< RX of scattered packets is ON(1) / OFF(0) */
- all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
- dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
- lro : 1, /**< RX LRO is ON(1) / OFF(0) */
- dev_configured : 1;
- /**< Indicates whether the device is configured.
- * CONFIGURED(1) / NOT CONFIGURED(0).
- */
- uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
- /**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
- uint32_t dev_flags; /**< Capabilities. */
- int numa_node; /**< NUMA node connection. */
- struct rte_vlan_filter_conf vlan_filter_conf;
- /**< VLAN filter configuration. */
- struct rte_eth_dev_owner owner; /**< The port owner. */
- uint16_t representor_id;
- /**< Switch-specific identifier.
- * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
- */
-
- pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
- uint64_t reserved_64s[4]; /**< Reserved for future fields */
- void *reserved_ptrs[4]; /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
#endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0881202381..3dc494a016 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -75,7 +75,6 @@ DPDK_22 {
rte_eth_dev_udp_tunnel_port_add;
rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
- rte_eth_devices;
rte_eth_find_next;
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
@@ -272,6 +271,7 @@ INTERNAL {
rte_eth_dev_release_port;
rte_eth_dev_internal_reset;
rte_eth_devargs_parse;
+ rte_eth_devices;
rte_eth_dma_zone_free;
rte_eth_dma_zone_reserve;
rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
#include <rte_common.h>
#include <rte_dev.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
*/
#include <rte_spinlock.h>
#include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include "eventdev_pmd.h"
#include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_string_fns.h>
#ifdef RTE_LIB_TELEMETRY
#include <telemetry_internal.h>
--
2.26.3
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2021-10-04 13:55 6% ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-04 13:55 2% ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-04 13:56 2% ` Konstantin Ananyev
2021-10-05 9:54 0% ` David Marchand
2021-10-04 13:56 9% ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
` (2 subsequent siblings)
5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Rework the 'fast' burst functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
causes one extra function call. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ethdev/ethdev_private.c | 31 +++++
lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++----------
lib/ethdev/version.map | 5 +
3 files changed, 210 insertions(+), 68 deletions(-)
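For reference, an illustrative fragment of the kind of application fast path
this rework is meant to leave untouched (BURST_SIZE, queue number 0 and
sketch_fwd_once are arbitrary/hypothetical); it keeps compiling and working
unchanged because only the bodies of the inline wrappers change:
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#define BURST_SIZE 32
static void
sketch_fwd_once(uint16_t port_id)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        /* both wrappers below now go through rte_eth_fp_ops[] internally */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
        if (nb_tx < nb_rx)
                rte_pktmbuf_free_bulk(&bufs[nb_tx], nb_rx - nb_tx);
}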
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..27d29b2ac6 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->txq.data = dev->data->tx_queues;
fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
}
+
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+ nb_pkts, cb->param);
+ cb = cb->next;
+ }
+
+ return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+ const struct rte_eth_rxtx_callback *cb = opaque;
+
+ while (cb != NULL) {
+ nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+ cb->param);
+ cb = cb->next;
+ }
+
+ return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9642b7c00f..7f68be406e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
#include <rte_ethdev_core.h>
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from the PMD's rte_eth_rx_burst implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ * The address of an array of pointers to *rte_mbuf* structures that
+ * have been retrieved from the device.
+ * @param nb_rx
+ * The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ * The number of elements in *rx_pkts* array.
+ * @param opaque
+ * Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ * The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+ void *opaque);
+
/**
*
* Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
uint16_t nb_rx;
+ struct rte_eth_fp_ops *p;
+ void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_RX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_rx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
- nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
- rx_pkts, nb_pkts);
+
+ nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
/* __ATOMIC_RELEASE memory order was used when the
* call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
- __ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
- nb_pkts, cb->param);
- cb = cb->next;
- } while (cb != NULL);
- }
+ cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+ if (unlikely(cb != NULL))
+ nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+ nb_pkts, cb);
#endif
rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
static inline int
rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
+
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- dev = &rte_eth_devices[port_id];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
- if (queue_id >= dev->data->nb_rx_queues ||
- dev->data->rx_queues[queue_id] == NULL)
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+ if (qd == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+ return (int)(*p->rx_queue_count)(qd);
}
/**
@@ -5133,21 +5179,30 @@ static inline int
rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *rxq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_RX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->rxq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_RX
- if (queue_id >= dev->data->nb_rx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
- rxq = dev->data->rx_queues[queue_id];
-
- return (*dev->rx_descriptor_status)(rxq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+ return (*p->rx_descriptor_status)(qd, offset);
}
/**@{@name Tx hardware descriptor states
@@ -5194,23 +5249,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
uint16_t queue_id, uint16_t offset)
{
- struct rte_eth_dev *dev;
- void *txq;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return -EINVAL;
+ }
#endif
- dev = &rte_eth_devices[port_id];
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
+
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues)
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ if (qd == NULL)
return -ENODEV;
#endif
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
- txq = dev->data->tx_queues[queue_id];
-
- return (*dev->tx_descriptor_status)(txq, offset);
+ RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+ return (*p->tx_descriptor_status)(qd, offset);
}
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entering the PMD's rte_eth_tx_burst implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The index of the transmit queue through which output packets must be
+ * sent.
+ * @param tx_pkts
+ * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ * which contain the output packets.
+ * @param nb_pkts
+ * The maximum number of packets to transmit.
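+ * @param opaque
+ * Opaque pointer of TX queue callback related data.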
+ * @return
+ * The number of output packets to transmit.
+ */
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
/**
* Send a burst of output packets on a transmit queue of an Ethernet device.
*
@@ -5281,20 +5367,34 @@ static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_fp_ops *p;
+ void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
return 0;
}
#endif
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
- struct rte_eth_rxtx_callback *cb;
/* __ATOMIC_RELEASE memory order was used when the
* call back was inserted into the list.
@@ -5302,21 +5402,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
* cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
* not required.
*/
- cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
- __ATOMIC_RELAXED);
-
- if (unlikely(cb != NULL)) {
- do {
- nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
- cb->param);
- cb = cb->next;
- } while (cb != NULL);
- }
+ cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+ if (unlikely(cb != NULL))
+ nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+ nb_pkts, cb);
#endif
- rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
- nb_pkts);
- return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+ nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+ rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+ return nb_pkts;
}
/**
@@ -5379,31 +5474,42 @@ static inline uint16_t
rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
- struct rte_eth_dev *dev;
+ struct rte_eth_fp_ops *p;
+ void *qd;
#ifdef RTE_ETHDEV_DEBUG_TX
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ if (port_id >= RTE_MAX_ETHPORTS ||
+ queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid port_id=%u or queue_id=%u\n",
+ port_id, queue_id);
rte_errno = ENODEV;
return 0;
}
#endif
- dev = &rte_eth_devices[port_id];
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[port_id];
+ qd = p->txq.data[queue_id];
#ifdef RTE_ETHDEV_DEBUG_TX
- if (queue_id >= dev->data->nb_tx_queues) {
- RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+ rte_errno = ENODEV;
+ return 0;
+ }
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+ queue_id, port_id);
rte_errno = EINVAL;
return 0;
}
#endif
- if (!dev->tx_pkt_prepare)
+ if (!p->tx_pkt_prepare)
return nb_pkts;
- return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
- tx_pkts, nb_pkts);
+ return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
}
#else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..2348ec3c3c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -1,6 +1,10 @@
DPDK_22 {
global:
+ # internal functions called by public inline ones
+ __rte_eth_rx_epilog;
+ __rte_eth_tx_prolog;
+
rte_eth_add_first_rx_callback;
rte_eth_add_rx_callback;
rte_eth_add_tx_callback;
@@ -76,6 +80,7 @@ DPDK_22 {
rte_eth_find_next_of;
rte_eth_find_next_owned_by;
rte_eth_find_next_sibling;
+ rte_eth_fp_ops;
rte_eth_iterator_cleanup;
rte_eth_iterator_init;
rte_eth_iterator_next;
--
2.26.3
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
2021-10-01 17:02 0% ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Ferruh Yigit
@ 2021-10-04 13:55 4% ` Konstantin Ananyev
2021-10-04 13:55 6% ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
` (5 more replies)
2 siblings, 6 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
v4 changes:
- Fix secondary process attach (Pavan)
- Fix build failure (Ferruh)
- Update lib/ethdev/version.map (Ferruh)
Note that moving newly added symbols from the EXPERIMENTAL to the DPDK_22
section makes checkpatch.sh complain.
v3 changes:
- Changes in public struct naming (Jerin/Haiyue)
- Split patches
- Update docs
- Shamelessly included Andrew's patch:
https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
into this series.
I have to do a similar thing here, so decided to avoid duplicated effort.
The aim of this patch series is to make the rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow possible future changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, but it is a formal ABI break.
The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy the public 'fast' function pointers (rx_pkt_burst(), etc.) and
the related data pointers from rte_eth_dev into a separate flat array.
We keep it public to still be able to use inline functions for these
'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
Note that apart from the function pointers themselves, each element of this
flat array also contains two opaque pointers for each ethdev:
1) a pointer to an array of internal queue data pointers
2) a pointer to an array of queue callback data pointers.
Exposing this extra information allows us to avoid extra changes at
the PMD level and should help to avoid possible performance
degradation.
2. Change the implementation of the 'fast' inline ethdev functions
(rte_eth_rx_burst(), etc.) to use the new public flat array.
While it is an ABI breakage, this change is intended to be transparent
for both users (no changes in user applications are required) and PMD
developers (no changes in PMDs are required).
One extra note: with the new implementation, RX/TX callback invocation
costs one extra function call (see the callback sketch right after this
list of points). That might cause some slowdown for code paths with
RX/TX callbacks heavily involved.
Hope such a trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
things into internal header: <ethdev_driver.h>.
That approach was selected to:
- Avoid(/minimize) possible performance losses.
- Minimize required changes inside PMDs.
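To make the callback note in point 2 concrete, here is a minimal usage sketch
(illustrative only; count_cb and pkt_cnt are hypothetical names) of an RX
callback registered through the existing public API. With the new
implementation, when such a callback is installed the chain is walked in an
out-of-line helper instead of directly inside the inline wrapper, which is
where the one extra function call comes from:
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
static uint16_t
count_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkts[],
        uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
        /* post-process the received burst; here just count the packets */
        uint64_t *cnt = user_param;
        *cnt += nb_pkts;
        RTE_SET_USED(port_id);
        RTE_SET_USED(queue_id);
        RTE_SET_USED(pkts);
        RTE_SET_USED(max_pkts);
        return nb_pkts;
}
static uint64_t pkt_cnt;
/* at init time, after the RX queue is set up:
 * rte_eth_add_rx_callback(port_id, 0, count_cb, &pkt_cnt);
 */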
Performance testing results (ICX 2.0GHz, E810 (ice)):
- testpmd macswap fwd mode, plus
a) no RX/TX callbacks:
no actual slowdown observed
b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
~2% slowdown
- l3fwd: no actual slowdown observed
Would like to thank everyone who has already reviewed and tested previous
versions of this series. All other interested parties, please don't be shy
and provide your feedback.
Konstantin Ananyev (7):
ethdev: allocate max space for internal queue array
ethdev: change input parameters for rx_queue_count
ethdev: copy ethdev 'fast' API into separate structure
ethdev: make burst functions to use new flat array
ethdev: add API to retrieve multiple ethernet addresses
ethdev: remove legacy Rx descriptor done API
ethdev: hide eth dev related structures
app/test-pmd/config.c | 23 +-
doc/guides/nics/features.rst | 6 +-
doc/guides/rel_notes/deprecation.rst | 5 -
doc/guides/rel_notes/release_21_11.rst | 21 ++
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/ark/ark_ethdev_rx.c | 4 +-
drivers/net/ark/ark_ethdev_rx.h | 3 +-
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 +-
drivers/net/bnxt/bnxt_ethdev.c | 8 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/dpaa/dpaa_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ptp.c | 2 +-
drivers/net/e1000/e1000_ethdev.h | 10 +-
drivers/net/e1000/em_ethdev.c | 1 -
drivers/net/e1000/em_rxtx.c | 21 +-
drivers/net/e1000/igb_ethdev.c | 2 -
drivers/net/e1000/igb_rxtx.c | 21 +-
drivers/net/enic/enic_ethdev.c | 12 +-
drivers/net/fm10k/fm10k.h | 5 +-
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/fm10k/fm10k_rxtx.c | 29 +-
drivers/net/hns3/hns3_rxtx.c | 7 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 30 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/iavf/iavf_rxtx.c | 4 +-
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 +-
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_ethdev.c | 1 -
drivers/net/igc/igc_txrx.c | 23 +-
drivers/net/igc/igc_txrx.h | 5 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 2 -
drivers/net/ixgbe/ixgbe_ethdev.h | 5 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
drivers/net/mlx5/mlx5_rx.c | 26 +-
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/netvsc/hn_var.h | 3 +-
drivers/net/nfp/nfp_rxtx.c | 4 +-
drivers/net/nfp/nfp_rxtx.h | 3 +-
drivers/net/octeontx2/otx2_ethdev.c | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 3 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 20 +-
drivers/net/sfc/sfc_ethdev.c | 29 +-
drivers/net/thunderx/nicvf_ethdev.c | 3 +-
drivers/net/thunderx/nicvf_rxtx.c | 4 +-
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +-
drivers/net/txgbe/txgbe_rxtx.c | 4 +-
drivers/net/vhost/rte_eth_vhost.c | 4 +-
drivers/net/virtio/virtio_ethdev.c | 1 -
lib/ethdev/ethdev_driver.h | 149 +++++++++
lib/ethdev/ethdev_private.c | 83 +++++
lib/ethdev/ethdev_private.h | 7 +
lib/ethdev/rte_ethdev.c | 89 ++++--
lib/ethdev/rte_ethdev.h | 286 ++++++++++++------
lib/ethdev/rte_ethdev_core.h | 171 +++--------
lib/ethdev/version.map | 10 +-
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/metrics/rte_metrics_telemetry.c | 2 +-
68 files changed, 673 insertions(+), 570 deletions(-)
--
2.26.3
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
@ 2021-10-04 13:55 6% ` Konstantin Ananyev
2021-10-04 13:55 2% ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
` (4 subsequent siblings)
5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
To: dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
Konstantin Ananyev
Currently, the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter,
while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
index.
For future work to hide rte_eth_devices[] and friends, it would be
preferable to unify the parameter lists of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 ++++++
drivers/net/ark/ark_ethdev_rx.c | 4 ++--
drivers/net/ark/ark_ethdev_rx.h | 3 +--
drivers/net/atlantic/atl_ethdev.h | 2 +-
drivers/net/atlantic/atl_rxtx.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 8 +++++---
drivers/net/dpaa/dpaa_ethdev.c | 9 ++++-----
drivers/net/dpaa2/dpaa2_ethdev.c | 9 ++++-----
drivers/net/e1000/e1000_ethdev.h | 6 ++----
drivers/net/e1000/em_rxtx.c | 4 ++--
drivers/net/e1000/igb_rxtx.c | 4 ++--
drivers/net/enic/enic_ethdev.c | 12 ++++++------
drivers/net/fm10k/fm10k.h | 2 +-
drivers/net/fm10k/fm10k_rxtx.c | 4 ++--
drivers/net/hns3/hns3_rxtx.c | 7 +++++--
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/i40e/i40e_rxtx.c | 4 ++--
drivers/net/i40e/i40e_rxtx.h | 3 +--
drivers/net/iavf/iavf_rxtx.c | 4 ++--
drivers/net/iavf/iavf_rxtx.h | 2 +-
drivers/net/ice/ice_rxtx.c | 4 ++--
drivers/net/ice/ice_rxtx.h | 2 +-
drivers/net/igc/igc_txrx.c | 5 ++---
drivers/net/igc/igc_txrx.h | 3 +--
drivers/net/ixgbe/ixgbe_ethdev.h | 3 +--
drivers/net/ixgbe/ixgbe_rxtx.c | 4 ++--
drivers/net/mlx5/mlx5_rx.c | 26 ++++++++++++-------------
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/netvsc/hn_rxtx.c | 4 ++--
drivers/net/netvsc/hn_var.h | 2 +-
drivers/net/nfp/nfp_rxtx.c | 4 ++--
drivers/net/nfp/nfp_rxtx.h | 3 +--
drivers/net/octeontx2/otx2_ethdev.h | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 8 ++++----
drivers/net/sfc/sfc_ethdev.c | 12 ++++++------
drivers/net/thunderx/nicvf_ethdev.c | 3 +--
drivers/net/thunderx/nicvf_rxtx.c | 4 ++--
drivers/net/thunderx/nicvf_rxtx.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 3 +--
drivers/net/txgbe/txgbe_rxtx.c | 4 ++--
drivers/net/vhost/rte_eth_vhost.c | 4 ++--
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_core.h | 3 +--
43 files changed, 103 insertions(+), 110 deletions(-)
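The prototype change at the core of this patch, shown side by side for a
hypothetical driver callback (eth_foo_rx_queue_count is not a real function;
real drivers are converted in the hunks below):
/* before: the op needed the ethdev pointer plus a queue index */
uint32_t eth_foo_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
/* after: the op takes the internal queue data pointer directly,
 * matching the other 'fast' ethdev ops
 */
uint32_t eth_foo_rx_queue_count(void *rx_queue);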
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..fd80538b6c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -213,6 +213,12 @@ ABI Changes
``rte_security_ipsec_xform`` to allow applications to configure SA soft
and hard expiry limits. Limits can be either in number of packets or bytes.
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` were changed.
+ Instead of a pointer to ``rte_eth_dev`` and a queue index, it now accepts a
+ pointer to internal queue data as input parameter. While this change is transparent
+ to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+ is used by public inline function ``rte_eth_rx_queue_count``.
+
Known Issues
------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
}
uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
{
struct ark_rx_queue *queue;
- queue = dev->data->rx_queues[queue_id];
+ queue = rx_queue;
return (queue->prod_index - queue->cons_index); /* mod arith */
}
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Return Rx queue avail count */
uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
{
struct atl_rx_queue *rxq;
PMD_INIT_FUNC_TRACE();
- if (rx_queue_id >= dev->data->nb_rx_queues) {
- PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
- return 0;
- }
-
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
if (rxq == NULL)
return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
}
static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
{
- struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+ struct bnxt *bp;
struct bnxt_cp_ring_info *cpr;
uint32_t desc = 0, raw_cons, cp_ring_size;
struct bnxt_rx_queue *rxq;
struct rx_pkt_cmpl *rxcmp;
int rc;
+ rxq = rx_queue;
+ bp = rxq->bp;
+
rc = is_bnxt_in_error(bp);
if (rc)
return rc;
- rxq = dev->data->rx_queues[rx_queue_id];
cpr = rxq->cp_ring;
raw_cons = cpr->cp_raw_cons;
cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
}
static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
- struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ struct qman_fq *rxq = rx_queue;
u32 frm_cnt = 0;
PMD_INIT_FUNC_TRACE();
if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
- DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frm_cnt);
+ DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frm_cnt);
}
return frm_cnt;
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
}
static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
{
int32_t ret;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
struct dpaa2_queue *dpaa2_q;
struct qbman_swp *swp;
struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
swp = DPAA2_PER_LCORE_PORTAL;
- dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+ dpaa2_q = rx_queue;
if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
frame_cnt = qbman_fq_state_frame_count(&state);
- DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
- rx_queue_id, frame_cnt);
+ DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+ rx_queue, frame_cnt);
}
return frame_cnt;
}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
{
#define EM_RXQ_SCAN_INTERVAL 4
volatile struct e1000_rx_desc *rxdp;
struct em_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
{
#define IGB_RXQ_SCAN_INTERVAL 4
volatile union e1000_adv_rx_desc *rxdp;
struct igb_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
enic_free_rq(rxq);
}
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
{
- struct enic *enic = pmd_priv(dev);
+ struct enic *enic;
+ struct vnic_rq *sop_rq;
uint32_t queue_count = 0;
struct vnic_cq *cq;
uint32_t cq_tail;
uint16_t cq_idx;
- int rq_num;
- rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
- cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+ sop_rq = rx_queue;
+ enic = vnic_dev_priv(sop_rq->vdev);
+ cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
cq_idx = cq->to_clean;
cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
int
fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
{
#define FM10K_RXQ_SCAN_INTERVAL 4
volatile union fm10k_rx_desc *rxdp;
struct fm10k_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->hw_ring[rxq->next_dd];
while ((desc < rxq->nb_desc) &&
rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
}
uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
{
/*
* Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
*/
uint32_t driver_hold_bd_num;
struct hns3_rx_queue *rxq;
+ const struct rte_eth_dev *dev;
uint32_t fbd_num;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
+ dev = &rte_eth_devices[rxq->port_id];
+
fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
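For illustration, a minimal sketch of the pattern used here (and in the
mlx5/otx2 hunks below): when the callback still needs the ethdev, e.g. to
inspect the active rx_pkt_burst, it can be recovered from a port_id stored
in the queue structure. The struct demo_rxq and its fields are assumptions
for the sketch, not part of this patch.
#include <rte_ethdev.h>
/* Hypothetical queue structure; only port_id matters for the lookup. */
struct demo_rxq {
	uint16_t port_id;	/* filled in at queue setup time */
	uint16_t in_use;	/* used descriptors, maintained by Rx path */
};
static uint32_t
demo_rx_queue_count(void *rx_queue)
{
	struct demo_rxq *rxq = rx_queue;
	const struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
	/* The device is only needed to check the active burst function. */
	if (dev->rx_pkt_burst == NULL)
		return 0;
	return rxq->in_use;
}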
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
struct rte_mempool *mp);
int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
}
uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
{
#define I40E_RXQ_SCAN_INTERVAL 4
volatile union i40e_rx_desc *rxdp;
struct i40e_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
/* Get the number of used descriptors of a rx queue */
uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
{
#define IAVF_RXQ_SCAN_INTERVAL 4
volatile union iavf_rx_desc *rxdp;
struct iavf_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
}
uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
{
#define ICE_RXQ_SCAN_INTERVAL 4
volatile union ice_rx_flex_desc *rxdp;
struct ice_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
void ice_set_tx_function_flag(struct rte_eth_dev *dev,
struct ice_tx_queue *txq);
void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
igc_rx_queue_release(rxq);
}
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
{
/**
* Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
struct igc_rx_queue *rxq;
uint16_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
{
#define IXGBE_RXQ_SCAN_INTERVAL 4
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &(rxq->rx_ring[rxq->rx_tail]);
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
/**
* DPDK callback to get the number of used descriptors in a RX queue.
*
- * @param dev
- * Pointer to the device structure.
- *
- * @param rx_queue_id
- * The Rx queue.
+ * @param rx_queue
+ * The Rx queue pointer.
*
* @return
* The number of used rx descriptor.
* -EINVAL if the queue is invalid
*/
uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rxq_data *rxq;
+ struct mlx5_rxq_data *rxq = rx_queue;
+ struct rte_eth_dev *dev;
+
+ if (!rxq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
dev->rx_pkt_burst == removed_rx_burst) {
rte_errno = ENOTSUP;
return -rte_errno;
}
- rxq = (*priv->rxqs)[rx_queue_id];
- if (!rxq) {
- rte_errno = EINVAL;
- return -rte_errno;
- }
+
return rx_queue_count(rxq);
}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
* For this device that means how many packets are pending in the ring.
*/
uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
{
- struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+ struct hn_rx_queue *rxq = rx_queue;
return rte_ring_count(rxq->rx_ring);
}
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
int hn_dev_rx_queue_status(void *rxq, uint16_t offset);
void hn_dev_free_queues(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
}
uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
{
struct nfp_net_rxq *rxq;
struct nfp_net_rx_desc *rxds;
uint32_t idx;
uint32_t count;
- rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
idx = rxq->rd_p;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
} __rte_aligned(64);
int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
}
uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_rxq *rxq = rx_queue;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
uint32_t head, tail;
- nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
return (tail - head) % rxq->qlen;
}
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
* use any process-local pointers from the adapter data.
*/
static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
{
- const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
- struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
- sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+ struct sfc_dp_rxq *dp_rxq = rx_queue;
+ const struct sfc_dp_rx *dp_rx;
struct sfc_rxq_info *rxq_info;
- rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+ dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+ rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
return 0;
- return sap->dp_rx->qdesc_npending(rxq_info->dp);
+ return dp_rx->qdesc_npending(dp_rxq);
}
/*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
if (dev->rx_pkt_burst == NULL)
return;
- while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
- nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+ while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
NICVF_MAX_RX_FREE_THRESH);
PMD_DRV_LOG(INFO, "nb_pkts=%d rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
}
uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
{
struct nicvf_rxq *rxq;
- rxq = dev->data->rx_queues[queue_idx];
+ rxq = rx_queue;
return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
*(uint64_t *)(&pkt->rearm_data) = init.value;
}
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
{
#define TXGBE_RXQ_SCAN_INTERVAL 4
volatile struct txgbe_rx_desc *rxdp;
struct txgbe_rx_queue *rxq;
uint32_t desc = 0;
- rxq = dev->data->rx_queues[rx_queue_id];
+ rxq = rx_queue;
rxdp = &rxq->rx_ring[rxq->rx_tail];
while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
}
static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
{
struct vhost_queue *vq;
- vq = dev->data->rx_queues[rx_queue_id];
+ vq = rx_queue;
if (vq == NULL)
return 0;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..9642b7c00f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
dev->data->rx_queues[queue_id] == NULL)
return -EINVAL;
- return (int)(*dev->rx_queue_count)(dev, queue_id);
+ return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
}
/**
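For reference, application-side usage is unchanged by this conversion: the
public API still takes (port_id, queue_id) and resolves the queue pointer
internally. A minimal illustrative call, assuming port 0 / queue 0 are
already configured:
#include <stdio.h>
#include <rte_ethdev.h>
static void
print_rxq_usage(void)
{
	/* Negative return value means an error (e.g. -EINVAL). */
	int used = rte_eth_rx_queue_count(0, 0);
	if (used < 0)
		printf("rx_queue_count failed: %d\n", used);
	else
		printf("port 0 rxq 0: %d descriptors in use\n", used);
}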
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..948c0b71c1 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
/**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
- uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
/**< @internal Get number of used descriptors on a receive queue. */
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
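A driver-side sketch of the new callback signature, using a hypothetical
struct foo_rx_queue whose fields are assumptions for illustration; the
queue data pointer now arrives directly, so no ethdev lookup is needed:
#include <stdint.h>
struct foo_rx_queue {
	volatile uint8_t *dd_status;	/* one DD flag per descriptor */
	uint16_t nb_desc;
	uint16_t rx_tail;
};
static uint32_t
foo_rx_queue_count(void *rx_queue)
{
	struct foo_rx_queue *rxq = rx_queue;
	uint32_t desc = 0;
	/* Count consecutive completed descriptors from the current tail. */
	while (desc < rxq->nb_desc &&
	       rxq->dd_status[(rxq->rx_tail + desc) % rxq->nb_desc])
		desc++;
	return desc;
}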
--
2.26.3
^ permalink raw reply [relevance 6%]
* [dpdk-dev] [PATCH v2] ci: update machine meson option to platform
2021-10-04 12:55 4% [dpdk-dev] [PATCH v1] ci: update machine meson option to platform Juraj Linkeš
@ 2021-10-04 13:29 4% ` Juraj Linkeš
2021-10-11 13:40 4% ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
0 siblings, 1 reply; 200+ results
From: Juraj Linkeš @ 2021-10-04 13:29 UTC (permalink / raw)
To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš
The way we're building DPDK in CI, with -Dmachine=default, has not been
updated when the option got replaced, in order to preserve a backwards-compatible
build call and facilitate ABI verification between DPDK versions. Update
the call to use -Dplatform=generic, which is the most up-to-date way to
execute the same build and is now present in all DPDK versions the ABI
check verifies.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
.ci/linux-build.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..06aaa79100 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
OPTS="$OPTS -Dexamples=all"
fi
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -Dplatform=generic"
OPTS="$OPTS --default-library=$DEF_LIB"
OPTS="$OPTS --buildtype=debugoptimized"
OPTS="$OPTS -Dcheck_includes=true"
--
2.20.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v1] ci: update machine meson option to platform
@ 2021-10-04 12:55 4% Juraj Linkeš
2021-10-04 13:29 4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
0 siblings, 1 reply; 200+ results
From: Juraj Linkeš @ 2021-10-04 12:55 UTC (permalink / raw)
To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš
The way we're building DPDK in CI, with -Dmachine=default, has not been
updated when the option got replaced, in order to preserve a backwards-compatible
build call and facilitate ABI verification between DPDK versions. Update
the call to use -Dplatform=generic, which is the most up-to-date way to
execute the same build and is now present in all DPDK versions the ABI
check verifies.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
.ci/linux-build.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..f8710e3ad4 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
OPTS="$OPTS -Dexamples=all"
fi
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -platform=generic"
OPTS="$OPTS --default-library=$DEF_LIB"
OPTS="$OPTS --buildtype=debugoptimized"
OPTS="$OPTS -Dcheck_includes=true"
--
2.20.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v4] net: introduce IPv4 ihl and version fields
@ 2021-10-04 12:13 4% ` Gregory Etelson
2021-10-12 12:29 4% ` [dpdk-dev] [PATCH v5] " Gregory Etelson
1 sibling, 0 replies; 200+ results
From: Gregory Etelson @ 2021-10-04 12:13 UTC (permalink / raw)
To: dev, getelson; +Cc: matan, rasland, olivier.matz, thomas, Bernard Iremonger
The RTE IPv4 header definition combines the `version' and `ihl' fields
into a single structure member.
This patch introduces dedicated structure members for both the `version'
and `ihl' IPv4 fields. Separate header field definitions allow creating
simplified code to match on the IHL value in a flow rule.
The original `version_ihl' structure member is kept for backward
compatibility.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Depends-on: f7383e7c7ec1 ("net: announce changes in IPv4 header access")
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v2: Add dependency.
v3: Add comments.
v4: Update release notes.
---
app/test/test_flow_classify.c | 8 ++++----
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/net/rte_ip.h | 16 +++++++++++++++-
3 files changed, 22 insertions(+), 5 deletions(-)
diff --git a/app/test/test_flow_classify.c b/app/test/test_flow_classify.c
index 951606f248..4f64be5357 100644
--- a/app/test/test_flow_classify.c
+++ b/app/test/test_flow_classify.c
@@ -95,7 +95,7 @@ static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
* dst mask 255.255.255.00 / udp src is 32 dst is 33 / end"
*/
static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
RTE_IPV4(2, 2, 2, 3), RTE_IPV4(2, 2, 2, 7)}
};
static const struct rte_flow_item_ipv4 ipv4_mask_24 = {
@@ -131,7 +131,7 @@ static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END,
* dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end"
*/
static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
RTE_IPV4(1, 2, 3, 4), RTE_IPV4(5, 6, 7, 8)}
};
@@ -150,8 +150,8 @@ static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP,
* dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end"
*/
static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = {
- { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, RTE_IPV4(11, 12, 13, 14),
- RTE_IPV4(15, 16, 17, 18)}
+ { { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0,
+ RTE_IPV4(11, 12, 13, 14), RTE_IPV4(15, 16, 17, 18)}
};
static struct rte_flow_item_sctp sctp_spec_1 = {
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..deab44a92a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -170,6 +170,9 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* net: Add ``version`` and ``ihl`` bit-fields to ``struct rte_ipv4_hdr``.
+ Existing ``version_ihl`` field was kept for backward compatibility.
+
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 05948b69b7..89a68d9433 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -38,7 +38,21 @@ extern "C" {
* IPv4 Header
*/
struct rte_ipv4_hdr {
- uint8_t version_ihl; /**< version and header length */
+ __extension__
+ union {
+ uint8_t version_ihl; /**< version and header length */
+ struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+ uint8_t ihl:4; /**< header length */
+ uint8_t version:4; /**< version */
+#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ uint8_t version:4; /**< version */
+ uint8_t ihl:4; /**< header length */
+#else
+#error "setup endian definition"
+#endif
+ };
+ };
uint8_t type_of_service; /**< type of service */
rte_be16_t total_length; /**< length of packet */
rte_be16_t packet_id; /**< packet ID */
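A sketch of the simplified matching the commit log refers to, assuming the
new bit-fields are used through rte_flow; the IHL value (5, i.e. no
options) and the full 4-bit mask are arbitrary example values:
#include <rte_ip.h>
#include <rte_flow.h>
static const struct rte_flow_item_ipv4 ipv4_ihl_spec = {
	.hdr.ihl = 5,	/* 20-byte header, no options */
};
static const struct rte_flow_item_ipv4 ipv4_ihl_mask = {
	.hdr.ihl = 0xf,	/* match on the IHL bits only */
};
static const struct rte_flow_item ipv4_ihl_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV4,
	.spec = &ipv4_ihl_spec,
	.mask = &ipv4_ihl_mask,
};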
--
2.33.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
2021-10-04 10:13 3% ` Ferruh Yigit
@ 2021-10-04 11:17 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-04 11:17 UTC (permalink / raw)
To: Yigit, Ferruh, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay
>
> On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
> >
> >>>>
> >>>>> static inline int
> >>>>> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>>>> {
> >>>>> - struct rte_eth_dev *dev;
> >>>>> + struct rte_eth_fp_ops *p;
> >>>>> + void *qd;
> >>>>> +
> >>>>> + if (port_id >= RTE_MAX_ETHPORTS ||
> >>>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>>>> + RTE_ETHDEV_LOG(ERR,
> >>>>> + "Invalid port_id=%u or queue_id=%u\n",
> >>>>> + port_id, queue_id);
> >>>>> + return -EINVAL;
> >>>>> + }
> >>>>
> >>>> Should the checkes wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like others?
> >>>
> >>> Original rte_eth_rx_queue_count() always have similar checks enabled,
> >>> that's why I also kept them 'always on'.
> >>>
> >>>>
> >>>> <...>
> >>>>
> >>>>> +++ b/lib/ethdev/version.map
> >>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>>>> rte_mtr_meter_policy_delete;
> >>>>> rte_mtr_meter_policy_update;
> >>>>> rte_mtr_meter_policy_validate;
> >>>>> +
> >>>>> + # added in 21.05
> >>>>
> >>>> s/21.05/21.11/
> >>>>
> >>>>> + __rte_eth_rx_epilog;
> >>>>> + __rte_eth_tx_prolog;
> >>>>
> >>>> These are directly called by application and must be part of ABI, but marked as
> >>>> 'internal' and has '__rte' prefix to highligh it, this may be confusing.
> >>>> What about making them proper, non-internal, API?
> >>>
> >>> Hmm not sure what do you suggest here.
> >>> We don't want users to call them explicitly.
> >>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> >>> So I did what I thought is our usual policy for such semi-internal thigns:
> >>> have '@intenal' in comments, but in version.map put them under EXPERIMETAL/global
> >>> section.
> >>>
> >>> What do you think it should be instead?
> >>>
> >>
> >> Make them public API. (Basically just remove '__' prefix and @internal comment).
> >>
> >> This way application can use them to run custom callback(s) (not only the
> >> registered ones), not sure if this can be dangerous though.
> >
> > Hmm, as I said above, I don't want users to call them explicitly.
> > Do you have any good reason to allow it?
> >
>
> Just to get rid of this "internal API that is exposed to the application" state.
>
> >>
> >> We need to trace the ABI for these functions, making them public clarifies it.
> >
> > We do have plenty of semi-internal functions right now,
> > why adding that one will be a problem?
>
> As far as I remember existing ones are 'static inline' functions, and we don't
> have an ABI concern with them. But these are actual functions called by application.
Not always.
As an example of internal but not static ones:
rte_mempool_check_cookies
rte_mempool_contig_blocks_check_cookies
rte_mempool_op_calc_mem_size_helper
_rte_pktmbuf_read
>
> > From other side - if we'll declare it public, we will have obligations to support it
> > in future releases, plus it might encourage users to use it on its own.
> > To me that sounds like extra headache without any gain in return.
> >
>
> If having those two as public API doesn't make sense, I agree with you.
>
> >> Also comment can be updated to describe intended usage instead of marking them
> >> internal, and applications can use these anyway if we mark them internal or not.
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-03 18:05 3% ` Dmitry Kozlyuk
@ 2021-10-04 10:37 0% ` Harman Kalra
0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-04 10:37 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Ray Kinsella, David Marchand
Hi Dmitry,
Thanks for reviewing the series.
Please find my comments inline.
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Sunday, October 3, 2021 11:35 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get
> set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-09-03 18:10 (UTC+0530), Harman Kalra:
> > [...]
> > diff --git a/lib/eal/common/eal_common_interrupts.c
> > b/lib/eal/common/eal_common_interrupts.c
> > new file mode 100644
> > index 0000000000..2e4fed96f0
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_interrupts.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2021 Marvell.
> > + */
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +
> > +#include <rte_errno.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +
> > +#include <rte_interrupts.h>
> > +
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > + bool from_hugepage)
>
> Since the purpose of the series is to reduce future ABI breakages, how about
> making the second parameter "flags" to have some spare bits?
> (If not removing it completely per David's suggestion.)
>
<HK> Having second parameter "flags" is a good suggestion, I will include it.
> > +{
> > + struct rte_intr_handle *intr_handle;
> > + int i;
> > +
> > + if (from_hugepage)
> > + intr_handle = rte_zmalloc(NULL,
> > + size * sizeof(struct rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> > + if (!intr_handle) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + for (i = 0; i < size; i++) {
> > + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > + intr_handle[i].alloc_from_hugepage = from_hugepage;
> > + }
> > +
> > + return intr_handle;
> > +}
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > + struct rte_intr_handle *intr_handle, int
> index)
>
> If rte_intr_handle_instance_alloc() returns a pointer to an array, this function
> is useless since the user can simply manipulate a pointer.
<HK> The user won't be able to manipulate the pointer as he is not aware of the size of struct rte_intr_handle.
He will observe a "dereferencing pointer to incomplete type" compilation error.
> If we want to make a distinction between a single struct rte_intr_handle and
> a commonly allocated bunch of such (but why?), then they should be
> represented by distinct types.
<HK> Do you mean we should have separate APIs for single allocation and batch allocation? The get API
will be useful only in the case of batch allocation. Currently the interrupt autotests and the ifpga_rawdev driver make
batch allocations.
I think a common API for single and batch is fine; the get API is required for returning a particular intr_handle instance.
But one problem I see in the current implementation is that there should be an upper limit check for the index in the get/set
APIs, which I will fix.
>
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOMEM;
>
> Why it's sometimes ENOMEM and sometimes ENOTSUP when the handle is
> not allocated?
<HK> I will fix and make it symmetrical across.
>
> > + return NULL;
> > + }
> > +
> > + return &intr_handle[index];
> > +}
> > +
> > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> *intr_handle,
> > + const struct rte_intr_handle *src,
> > + int index)
>
> See above regarding the "index" parameter. If it can be removed, a better
> name for this function would be rte_intr_handle_copy().
<HK> I think get API is required.
>
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance
> unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + if (index < 0) {
> > + RTE_LOG(ERR, EAL, "Index cany be negative");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
>
> How about making this parameter "size_t"?
<HK> You mean index ? It can be size_t.
>
> > +
> > + intr_handle[index].fd = src->fd;
> > + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle[index].type = src->type;
> > + intr_handle[index].max_intr = src->max_intr;
> > + intr_handle[index].nb_efd = src->nb_efd;
> > + intr_handle[index].efd_counter_size = src->efd_counter_size;
> > +
> > + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> > + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
>
> Should be (-rte_errno) per documentation.
> Please check all functions in this file that return an "int" status.
<HK> Sure will fix it across APIs.
>
> > [...]
> > +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->vfio_dev_fd;
> > +fail:
> > + return rte_errno;
> > +}
>
> Returning a errno value instead of an FD is very error-prone.
> Probably returning (-1) is both safe and convenient?
<HK> Ack
>
> > +
> > +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> > + int max_intr)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (max_intr > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> > +PLT_MAX_RXTX_INTR_VEC_ID=%d",
>
> Seems like this common/cnxk name leaked here by mistake?
<HK> Thanks for catching this.
>
> > + max_intr, intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->max_intr = max_intr;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->max_intr;
> > +fail:
> > + return rte_errno;
> > +}
>
> Should be negative per documentation and to avoid returning a positive
> value that cannot be distinguished from a successful return.
> Please also check other functions in this file returning an "int" result (not
> status).
<HK> Will fix it.
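A sketch of how the get API could look with the points agreed above
(explicit upper bound on the index, size_t index, errno set on failure).
The nb_instances field is an assumed addition recording the batch size
from the alloc call; it is not in the current patch.
#include <stddef.h>
#include <rte_errno.h>
#include <rte_log.h>
#include <rte_interrupts.h>
struct rte_intr_handle *
rte_intr_handle_instance_index_get(struct rte_intr_handle *intr_handle,
				   size_t index)
{
	if (intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
		rte_errno = EINVAL;
		return NULL;
	}
	/* Assumed field: upper bound recorded at allocation time. */
	if (index >= intr_handle->nb_instances) {
		RTE_LOG(ERR, EAL, "Index %zu out of range\n", index);
		rte_errno = ERANGE;
		return NULL;
	}
	return &intr_handle[index];
}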
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
2021-10-04 9:20 0% ` Ananyev, Konstantin
@ 2021-10-04 10:13 3% ` Ferruh Yigit
2021-10-04 11:17 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-04 10:13 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay
On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
>
>>>>
>>>>> static inline int
>>>>> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>>> {
>>>>> - struct rte_eth_dev *dev;
>>>>> + struct rte_eth_fp_ops *p;
>>>>> + void *qd;
>>>>> +
>>>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>>> + RTE_ETHDEV_LOG(ERR,
>>>>> + "Invalid port_id=%u or queue_id=%u\n",
>>>>> + port_id, queue_id);
>>>>> + return -EINVAL;
>>>>> + }
>>>>
>>>> Should the checkes wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like others?
>>>
>>> Original rte_eth_rx_queue_count() always have similar checks enabled,
>>> that's why I also kept them 'always on'.
>>>
>>>>
>>>> <...>
>>>>
>>>>> +++ b/lib/ethdev/version.map
>>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>>>> rte_mtr_meter_policy_delete;
>>>>> rte_mtr_meter_policy_update;
>>>>> rte_mtr_meter_policy_validate;
>>>>> +
>>>>> + # added in 21.05
>>>>
>>>> s/21.05/21.11/
>>>>
>>>>> + __rte_eth_rx_epilog;
>>>>> + __rte_eth_tx_prolog;
>>>>
>>>> These are directly called by application and must be part of ABI, but marked as
>>>> 'internal' and has '__rte' prefix to highligh it, this may be confusing.
>>>> What about making them proper, non-internal, API?
>>>
>>> Hmm not sure what do you suggest here.
>>> We don't want users to call them explicitly.
>>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
>>> So I did what I thought is our usual policy for such semi-internal thigns:
>>> have '@intenal' in comments, but in version.map put them under EXPERIMETAL/global
>>> section.
>>>
>>> What do you think it should be instead?
>>>
>>
>> Make them public API. (Basically just remove '__' prefix and @internal comment).
>>
>> This way application can use them to run custom callback(s) (not only the
>> registered ones), not sure if this can be dangerous though.
>
> Hmm, as I said above, I don't want users to call them explicitly.
> Do you have any good reason to allow it?
>
Just to get rid of this "internal API that is exposed to the application" state.
>>
>> We need to trace the ABI for these functions, making them public clarifies it.
>
> We do have plenty of semi-internal functions right now,
> why adding that one will be a problem?
As far as I remember existing ones are 'static inline' functions, and we don't
have an ABI concern with them. But these are actual functions called by application.
> From other side - if we'll declare it public, we will have obligations to support it
> in future releases, plus it might encourage users to use it on its own.
> To me that sounds like extra headache without any gain in return.
>
If having those two as public API doesn't make sense, I agree with you.
>> Also comment can be updated to describe intended usage instead of marking them
>> internal, and applications can use these anyway if we mark them internal or not.
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 1/3] ethdev: update modify field flow action
2021-10-01 19:52 9% ` [dpdk-dev] [PATCH 1/3] " Viacheslav Ovsiienko
@ 2021-10-04 9:38 0% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2021-10-04 9:38 UTC (permalink / raw)
To: Slava Ovsiienko, dev
Cc: Raslan Darawsheh, Matan Azrad, Shahaf Shuler, Gregory Etelson,
NBU-Contact-Thomas Monjalon
Hi Slava,
> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Friday, October 1, 2021 10:52 PM
> Subject: [PATCH 1/3] ethdev: update modify field flow action
>
> The generic modify field flow action introduced in [1] has some issues related
> to the immediate source operand:
>
> - immediate source can be presented either as an unsigned
> 64-bit integer or pointer to data pattern in memory.
> There was no explicit pointer field defined in the union
>
> - the byte ordering for 64-bit integer was not specified.
> Many fields have lesser lengths and byte ordering
> is crucial.
>
> - how the bit offset is applied to the immediate source
> field was not defined and documented
>
> - 64-bit integer size is not enough to provide MAC and
I think for mac it is enough.
> IPv6 addresses
>
> In order to cover the issues and exclude any ambiguities the following is
> done:
>
> - introduce the explicit pointer field
> in rte_flow_action_modify_data structure
>
> - replace the 64-bit unsigned integer with 16-byte array
>
> - update the modify field flow action documentation
>
> [1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 8 ++++++++
> doc/guides/rel_notes/release_21_11.rst | 7 +++++++
> lib/ethdev/rte_flow.h | 15 ++++++++++++---
> 3 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> index 2b42d5ec8c..a54760a7b4 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2835,6 +2835,14 @@ a packet to any other part of it.
> ``value`` sets an immediate value to be used as a source or points to a
> location of the value in memory. It is used instead of ``level`` and ``offset``
> for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER``
> respectively.
> +The data in memory should be presented exactly in the same byte order
> +and length as in the relevant flow item, i.e. data for field with type
> +RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field in
> +rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
> +rte_flow_item_ipv6 conventions, and so on. The bitfield exatracted from
> +the memory being applied as second operation parameter is defined by
> +width and the destination field offset. If the field size is large than
> +16 bytes the pattern can be provided as pointer only.
>
You should specify where the offset of the src is taken from.
Per your example, if the application wants to change the 2nd byte of the source MAC,
it should give as an immediate value 6 bytes, with the second byte as the new value to set,
so where does it take the offset from? Since offset is not valid in the case of an immediate value,
I assume it is based on the offset of the destination.
> .. _table_rte_flow_action_modify_field:
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 73e377a007..7db6cccab0 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -170,6 +170,10 @@ API Changes
> the crypto/security operation. This field will be used to communicate
> events such as soft expiry with IPsec in lookaside mode.
>
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated, immediate
> +data
> + array is extended, data pointer field is explicitly added to union,
> +the
> + action behavior is defined in more strict fashion and documentation
> uddated.
> +
Uddated ->updated?
I think it is important to document here that the behavior has changed:
from setting only the relevant value to update, to setting the whole field, with
the mask handled internally.
>
> ABI Changes
> -----------
> @@ -206,6 +210,9 @@ ABI Changes
> and hard expiry limits. Limits can be either in number of packets or bytes.
>
>
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated.
> +
> +
> Known Issues
> ------------
>
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> 7b1ed7f110..af4c693ead 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3204,6 +3204,9 @@ enum rte_flow_field_id { };
>
> /**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> * Field description for MODIFY_FIELD action.
> */
> struct rte_flow_action_modify_data {
> @@ -3217,10 +3220,16 @@ struct rte_flow_action_modify_data {
> uint32_t offset;
> };
> /**
> - * Immediate value for RTE_FLOW_FIELD_VALUE or
> - * memory address for RTE_FLOW_FIELD_POINTER.
> + * Immediate value for RTE_FLOW_FIELD_VALUE, presented
> in the
> + * same byte order and length as in relevant
> rte_flow_item_xxx.
Please see my comment about how to get the offset.
> */
> - uint64_t value;
> + uint8_t value[16];
> + /*
> + * Memory address for RTE_FLOW_FIELD_POINTER, memory
> layout
> + * should be the same as for relevant field in the
> + * rte_flow_item_xxx structure.
I assume also in this case the offset comes from the dest right?
> + */
> + void *pvalue;
> };
> };
>
> --
> 2.18.1
Thanks,
Ori
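To make the byte-layout rules concrete, a sketch of an immediate-value
usage under the updated structure, setting the IPv4 TTL from a one-byte
immediate. Whether the destination offset also selects bits within the
immediate is exactly the open point above, so the offset semantics here
are an assumption, not defined behavior.
#include <string.h>
#include <rte_flow.h>
static struct rte_flow_action_modify_field ttl_conf = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_IPV4_TTL,
		.offset = 0,
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
	},
	.width = 8,	/* whole 8-bit TTL field */
};
static void
set_ttl_immediate(uint8_t ttl)
{
	/* value[] replaces the former 64-bit integer immediate. */
	memset(ttl_conf.src.value, 0, sizeof(ttl_conf.src.value));
	ttl_conf.src.value[0] = ttl;
}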
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
2021-10-04 8:46 3% ` Ferruh Yigit
@ 2021-10-04 9:20 0% ` Ananyev, Konstantin
2021-10-04 10:13 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-04 9:20 UTC (permalink / raw)
To: Yigit, Ferruh, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay
> >>
> >>> static inline int
> >>> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>> {
> >>> - struct rte_eth_dev *dev;
> >>> + struct rte_eth_fp_ops *p;
> >>> + void *qd;
> >>> +
> >>> + if (port_id >= RTE_MAX_ETHPORTS ||
> >>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>> + RTE_ETHDEV_LOG(ERR,
> >>> + "Invalid port_id=%u or queue_id=%u\n",
> >>> + port_id, queue_id);
> >>> + return -EINVAL;
> >>> + }
> >>
> >> Should the checkes wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like others?
> >
> > Original rte_eth_rx_queue_count() always have similar checks enabled,
> > that's why I also kept them 'always on'.
> >
> >>
> >> <...>
> >>
> >>> +++ b/lib/ethdev/version.map
> >>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>> rte_mtr_meter_policy_delete;
> >>> rte_mtr_meter_policy_update;
> >>> rte_mtr_meter_policy_validate;
> >>> +
> >>> + # added in 21.05
> >>
> >> s/21.05/21.11/
> >>
> >>> + __rte_eth_rx_epilog;
> >>> + __rte_eth_tx_prolog;
> >>
> >> These are directly called by application and must be part of ABI, but marked as
> >> 'internal' and has '__rte' prefix to highligh it, this may be confusing.
> >> What about making them proper, non-internal, API?
> >
> > Hmm not sure what do you suggest here.
> > We don't want users to call them explicitly.
> > They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> > So I did what I thought is our usual policy for such semi-internal thigns:
> > have '@intenal' in comments, but in version.map put them under EXPERIMETAL/global
> > section.
> >
> > What do you think it should be instead?
> >
>
> Make them public API. (Basically just remove '__' prefix and @internal comment).
>
> This way application can use them to run custom callback(s) (not only the
> registered ones), not sure if this can be dangerous though.
Hmm, as I said above, I don't want users to call them explicitly.
Do you have any good reason to allow it?
>
> We need to trace the ABI for these functions, making them public clarifies it.
We do have plenty of semi-internal functions right now,
why adding that one will be a problem?
From other side - if we'll declare it public, we will have obligations to support it
in future releases, plus it might encourage users to use it on its own.
To me that sounds like extra headache without any gain in return.
> Also comment can be updated to describe intended usage instead of marking them
> internal, and applications can use these anyway if we mark them internal or not.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
2021-10-01 17:40 0% ` Ananyev, Konstantin
@ 2021-10-04 8:46 3% ` Ferruh Yigit
2021-10-04 9:20 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-04 8:46 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay
On 10/1/2021 6:40 PM, Ananyev, Konstantin wrote:
>
>
>> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
>>> Rework 'fast' burst functions to use rte_eth_fp_ops[].
>>> While it is an API/ABI breakage, this change is intended to be
>>> transparent for both users (no changes in user app is required) and
>>> PMD developers (no changes in PMD is required).
>>> One extra thing to note - RX/TX callback invocation will cause extra
>>> function call with these changes. That might cause some insignificant
>>> slowdown for code-path where RX/TX callbacks are heavily involved.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> <...>
>>
>>> static inline int
>>> rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>> {
>>> - struct rte_eth_dev *dev;
>>> + struct rte_eth_fp_ops *p;
>>> + void *qd;
>>> +
>>> + if (port_id >= RTE_MAX_ETHPORTS ||
>>> + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Invalid port_id=%u or queue_id=%u\n",
>>> + port_id, queue_id);
>>> + return -EINVAL;
>>> + }
>>
>> Should the checkes wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like others?
>
> Original rte_eth_rx_queue_count() always have similar checks enabled,
> that's why I also kept them 'always on'.
>
>>
>> <...>
>>
>>> +++ b/lib/ethdev/version.map
>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>> rte_mtr_meter_policy_delete;
>>> rte_mtr_meter_policy_update;
>>> rte_mtr_meter_policy_validate;
>>> +
>>> + # added in 21.05
>>
>> s/21.05/21.11/
>>
>>> + __rte_eth_rx_epilog;
>>> + __rte_eth_tx_prolog;
>>
>> These are directly called by application and must be part of ABI, but marked as
>> 'internal' and has '__rte' prefix to highligh it, this may be confusing.
>> What about making them proper, non-internal, API?
>
> Hmm not sure what do you suggest here.
> We don't want users to call them explicitly.
> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> So I did what I thought is our usual policy for such semi-internal thigns:
> have '@intenal' in comments, but in version.map put them under EXPERIMETAL/global
> section.
>
> What do you think it should be instead?
>
Make them public API. (Basically just remove '__' prefix and @internal comment).
This way application can use them to run custom callback(s) (not only the
registered ones), not sure if this can be dangerous though.
We need to trace the ABI for these functions, making them public clarifies it.
Also comment can be updated to describe intended usage instead of marking them
internal, and applications can use these anyway if we mark them internal or not.
>>> };
>>>
>>> INTERNAL {
>>> global:
>>>
>>> + rte_eth_fp_ops;
>>
>> This variable is accessed in inline function, so accessed by application, not
>> sure if it suits the 'internal' object definition, internal should be only for
>> objects accessed by other parts of DPDK.
>> I think this can be added to 'DPDK_22'.
>>
>>> rte_eth_dev_allocate;
>>> rte_eth_dev_allocated;
>>> rte_eth_dev_attach_secondary;
>>>
>
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] cryptodev: extend data-unit length field
@ 2021-10-04 6:36 12% Matan Azrad
0 siblings, 0 replies; 200+ results
From: Matan Azrad @ 2021-10-04 6:36 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty; +Cc: dev, Thomas Monjalon
As described in [1] and as announced in [2], the field ``dataunit_len``
of the ``struct rte_crypto_cipher_xform`` is moved to the end of the
structure and extended to ``uint32_t``.
In this way, data-unit lengths bigger than 64K bytes can be supported.
[1] commit d014dddb2d69 ("cryptodev: support multiple cipher
data-units")
[2] commit 9a5c09211b3a ("doc: announce extension of crypto data-unit
length")
Signed-off-by: Matan Azrad <matan@nvidia.com>
---
app/test/test_cryptodev_blockcipher.h | 2 +-
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_21_11.rst | 3 +++
examples/l2fwd-crypto/main.c | 6 ++---
lib/cryptodev/rte_crypto_sym.h | 36 +++++++++-----------------
5 files changed, 19 insertions(+), 32 deletions(-)
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index dcaa08ae22..84f5d57787 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -97,7 +97,7 @@ struct blockcipher_test_data {
unsigned int cipher_offset;
unsigned int auth_offset;
- uint16_t xts_dataunit_len;
+ uint32_t xts_dataunit_len;
bool wrapped_key;
};
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..8b54088a39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -202,10 +202,6 @@ Deprecation Notices
* cryptodev: ``min`` and ``max`` fields of ``rte_crypto_param_range`` structure
will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
-* cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
- has a limited size ``uint16_t``.
- It will be moved and extended as ``uint32_t`` in DPDK 21.11.
-
* cryptodev: The structure ``rte_crypto_sym_vec`` would be updated to add
``dest_sgl`` to support out of place processing.
This field will be null for inplace processing.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..4a9d1dedd8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -190,6 +190,9 @@ ABI Changes
Use fixed width quotes for ``function_names`` or ``struct_names``.
Use the past tense.
+* cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
+ moved to the end of the structure and extended to ``uint32_t``.
+
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..78844cee18 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -182,7 +182,7 @@ struct l2fwd_crypto_params {
unsigned digest_length;
unsigned block_size;
- uint16_t cipher_dataunit_len;
+ uint32_t cipher_dataunit_len;
struct l2fwd_iv cipher_iv;
struct l2fwd_iv auth_iv;
@@ -1269,9 +1269,9 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
else if (strcmp(lgopts[option_index].name, "cipher_dataunit_len") == 0) {
retval = parse_size(&val, optarg);
- if (retval == 0 && val >= 0 && val <= UINT16_MAX) {
+ if (retval == 0 && val >= 0) {
options->cipher_xform.cipher.dataunit_len =
- (uint16_t)val;
+ (uint32_t)val;
return 0;
} else
return -1;
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 58c0724743..1106ad6201 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -195,9 +195,6 @@ struct rte_crypto_cipher_xform {
enum rte_crypto_cipher_algorithm algo;
/**< Cipher algorithm */
- RTE_STD_C11
- union { /* temporary anonymous union for ABI compatibility */
-
struct {
const uint8_t *data; /**< pointer to key data */
uint16_t length; /**< key length in bytes */
@@ -233,27 +230,6 @@ struct rte_crypto_cipher_xform {
* - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
* - Both keys must have the same size.
**/
-
- RTE_STD_C11
- struct { /* temporary anonymous struct for ABI compatibility */
- const uint8_t *_key_data; /* reserved for key.data union */
- uint16_t _key_length; /* reserved for key.length union */
- /* next field can fill the padding hole */
-
- uint16_t dataunit_len;
- /**< When RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS is enabled,
- * this is the data-unit length of the algorithm,
- * otherwise or when the value is 0, use the operation length.
- * The value should be in the range defined by the dataunit_set field
- * in the cipher capability.
- *
- * - For AES-XTS it is the size of data-unit, from IEEE Std 1619-2007.
- * For-each data-unit in the operation, the tweak (IV) value is
- * assigned consecutively starting from the operation assigned IV.
- */
-
- }; }; /* temporary struct nested in union for ABI compatibility */
-
struct {
uint16_t offset;
/**< Starting point for Initialisation Vector or Counter,
@@ -297,6 +273,18 @@ struct rte_crypto_cipher_xform {
* which can be in the range 7 to 13 inclusive.
*/
} iv; /**< Initialisation vector parameters */
+
+ uint32_t dataunit_len;
+ /**< When RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS is enabled,
+ * this is the data-unit length of the algorithm,
+ * otherwise or when the value is 0, use the operation length.
+ * The value should be in the range defined by the dataunit_set field
+ * in the cipher capability.
+ *
+ * - For AES-XTS it is the size of data-unit, from IEEE Std 1619-2007.
+ * For-each data-unit in the operation, the tweak (IV) value is
+ * assigned consecutively starting from the operation assigned IV.
+ */
};
/** Symmetric Authentication / Hash Algorithms
--
2.25.1
^ permalink raw reply [relevance 12%]
* [dpdk-dev] [PATCH v4 1/2] hash: split x86 and SW hash CRC intrinsics
2021-10-03 23:00 1% ` [dpdk-dev] [PATCH v3 1/2] hash: split x86 and SW hash CRC intrinsics pbhagavatula
@ 2021-10-04 5:52 1% ` pbhagavatula
0 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-04 5:52 UTC (permalink / raw)
To: ruifeng.wang, konstantin.ananyev, jerinj, Yipeng Wang,
Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Split x86 and SW hash CRC intrinsics into separate files.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v4 Changes:
- Fix compilation issues
v3 Changes:
- Split x86 and SW hash crc functions into separate files.
- Rename `rte_crc_arm64.h` to `hash_crc_arm64.h` as it is internal and not
installed by the meson build.
v2 Changes:
- Don't remove `rte_crc_arm64.h` for ABI purposes.
- Revert function pointer approach for performance reasons.
- Select the best available algorithm based on the arch when the user passes an
unsupported CRC32 algorithm (see the usage sketch after this list).
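Usage sketch of the public API these internal headers back, assuming the existing
rte_hash_crc_set_alg()/rte_hash_crc() interface and the CRC32_SSE42 flag stay
unchanged by this split:

/* Sketch: relies on the existing public rte_hash_crc.h interface. */
#include <stdio.h>
#include <stdint.h>
#include <rte_hash_crc.h>

int
main(void)
{
	const uint8_t data[] = "dpdk";
	uint32_t crc;

	/*
	 * Request a specific backend; per the v2 note above, an unsupported
	 * or unavailable choice falls back to the best algorithm for the
	 * architecture.
	 */
	rte_hash_crc_set_alg(CRC32_SSE42);

	crc = rte_hash_crc(data, sizeof(data) - 1, 0xFFFFFFFF);
	printf("crc32c(\"dpdk\") = 0x%08x\n", crc);

	return 0;
}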
lib/hash/hash_crc_sw.h | 419 ++++++++++++++++++++++++++++++++++++++++
lib/hash/hash_crc_x86.h | 62 ++++++
lib/hash/rte_hash_crc.h | 396 +------------------------------------
3 files changed, 483 insertions(+), 394 deletions(-)
create mode 100644 lib/hash/hash_crc_sw.h
create mode 100644 lib/hash/hash_crc_x86.h
diff --git a/lib/hash/hash_crc_sw.h b/lib/hash/hash_crc_sw.h
new file mode 100644
index 0000000000..4790a0970b
--- /dev/null
+++ b/lib/hash/hash_crc_sw.h
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_SW_H_
+#define _HASH_CRC_SW_H_
+
+/* Lookup tables for software implementation of CRC32C */
+static const uint32_t crc32c_tables[8][256] = {
+ {0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C,
+ 0x26A1E7E8, 0xD4CA64EB, 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B,
+ 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, 0x105EC76F, 0xE235446C,
+ 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
+ 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC,
+ 0xBC267848, 0x4E4DFB4B, 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A,
+ 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, 0xAA64D611, 0x580F5512,
+ 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
+ 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD,
+ 0x1642AE59, 0xE4292D5A, 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A,
+ 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, 0x417B1DBC, 0xB3109EBF,
+ 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
+ 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F,
+ 0xED03A29B, 0x1F682198, 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927,
+ 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, 0xDBFC821C, 0x2997011F,
+ 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
+ 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E,
+ 0x4767748A, 0xB50CF789, 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859,
+ 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, 0x7198540D, 0x83F3D70E,
+ 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
+ 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE,
+ 0xDDE0EB2A, 0x2F8B6829, 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C,
+ 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, 0x082F63B7, 0xFA44E0B4,
+ 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
+ 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B,
+ 0xB4091BFF, 0x466298FC, 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C,
+ 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, 0xA24BB5A6, 0x502036A5,
+ 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
+ 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975,
+ 0x0E330A81, 0xFC588982, 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D,
+ 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, 0x38CC2A06, 0xCAA7A905,
+ 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
+ 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8,
+ 0xE52CC12C, 0x1747422F, 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF,
+ 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, 0xD3D3E1AB, 0x21B862A8,
+ 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
+ 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78,
+ 0x7FAB5E8C, 0x8DC0DD8F, 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE,
+ 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, 0x69E9F0D5, 0x9B8273D6,
+ 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
+ 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69,
+ 0xD5CF889D, 0x27A40B9E, 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E,
+ 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351},
+ {0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB,
+ 0x69CF5132, 0x7A6DC945, 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21,
+ 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD, 0x3FC5F181, 0x2C6769F6,
+ 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
+ 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92,
+ 0xCB1E630B, 0xD8BCFB7C, 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B,
+ 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47, 0xE29F20BA, 0xF13DB8CD,
+ 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
+ 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28,
+ 0x298143B1, 0x3A23DBC6, 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2,
+ 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E, 0xFF17C604, 0xECB55E73,
+ 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
+ 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17,
+ 0x0BCC548E, 0x186ECCF9, 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C,
+ 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0, 0x5DC6F43D, 0x4E646C4A,
+ 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
+ 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD,
+ 0xE9537434, 0xFAF1EC43, 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27,
+ 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB, 0xBF59D487, 0xACFB4CF0,
+ 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
+ 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94,
+ 0x4B82460D, 0x5820DE7A, 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260,
+ 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC, 0x66D73941, 0x7575A136,
+ 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
+ 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3,
+ 0xADC95A4A, 0xBE6BC23D, 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059,
+ 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185, 0x844819FB, 0x97EA818C,
+ 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
+ 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8,
+ 0x70938B71, 0x63311306, 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3,
+ 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F, 0x26992BC2, 0x353BB3B5,
+ 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
+ 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556,
+ 0x6D1B6DCF, 0x7EB9F5B8, 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC,
+ 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600, 0x3B11CD7C, 0x28B3550B,
+ 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
+ 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F,
+ 0xCFCA5FF6, 0xDC68C781, 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766,
+ 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA, 0xE64B1C47, 0xF5E98430,
+ 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
+ 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5,
+ 0x2D557F4C, 0x3EF7E73B, 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F,
+ 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483},
+ {0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664,
+ 0xD1B1F617, 0x74F06469, 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6,
+ 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC, 0x70A27D8A, 0xD5E3EFF4,
+ 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
+ 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B,
+ 0x9942B558, 0x3C032726, 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67,
+ 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D, 0xD915C5D1, 0x7C5457AF,
+ 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
+ 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA,
+ 0x40577089, 0xE516E2F7, 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828,
+ 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32, 0xC76580D9, 0x622412A7,
+ 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
+ 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878,
+ 0x2E85480B, 0x8BC4DA75, 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20,
+ 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A, 0x8F96C396, 0x2AD751E8,
+ 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
+ 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9,
+ 0xF7908DDA, 0x52D11FA4, 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B,
+ 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161, 0x56830647, 0xF3C29439,
+ 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
+ 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6,
+ 0xBF63CE95, 0x1A225CEB, 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730,
+ 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A, 0xB3764986, 0x1637DBF8,
+ 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
+ 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD,
+ 0x2A34FCDE, 0x8F756EA0, 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F,
+ 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065, 0x6A638C57, 0xCF221E29,
+ 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
+ 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6,
+ 0x83834485, 0x26C2D6FB, 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE,
+ 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4, 0x2290CF18, 0x87D15D66,
+ 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
+ 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE,
+ 0x9DF3018D, 0x38B293F3, 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C,
+ 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36, 0x3CE08A10, 0x99A1186E,
+ 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
+ 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1,
+ 0xD50042C2, 0x7041D0BC, 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD,
+ 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7, 0x9557324B, 0x3016A035,
+ 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
+ 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760,
+ 0x0C158713, 0xA954156D, 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2,
+ 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8},
+ {0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B,
+ 0xC4451272, 0x1900B8CA, 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF,
+ 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C, 0xE964B13D, 0x34211B85,
+ 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
+ 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990,
+ 0xDB65C0A9, 0x06206A11, 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2,
+ 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41, 0x2161776D, 0xFC24DDD5,
+ 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
+ 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD,
+ 0xFA04B7C4, 0x27411D7C, 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69,
+ 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A, 0xABA65FE7, 0x76E3F55F,
+ 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
+ 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A,
+ 0x99A72E73, 0x44E284CB, 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3,
+ 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610, 0xB4868D3C, 0x69C32784,
+ 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
+ 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027,
+ 0xB8C6591E, 0x6583F3A6, 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3,
+ 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040, 0x95E7FA51, 0x48A250E9,
+ 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
+ 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC,
+ 0xA7E68BC5, 0x7AA3217D, 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006,
+ 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5, 0xA4E4AAD9, 0x79A10061,
+ 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
+ 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349,
+ 0x7F816A70, 0xA2C4C0C8, 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD,
+ 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E, 0x8585DDB4, 0x58C0770C,
+ 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
+ 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519,
+ 0xB784AC20, 0x6AC10698, 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0,
+ 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443, 0x9AA50F6F, 0x47E0A5D7,
+ 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
+ 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93,
+ 0x3D4384AA, 0xE0062E12, 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07,
+ 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4, 0x106227E5, 0xCD278D5D,
+ 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
+ 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48,
+ 0x22635671, 0xFF26FCC9, 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A,
+ 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99, 0xD867E1B5, 0x05224B0D,
+ 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
+ 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825,
+ 0x0302211C, 0xDE478BA4, 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1,
+ 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842},
+ {0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C,
+ 0x906761E8, 0xA8760E44, 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65,
+ 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5, 0x8F2261D3, 0xB7330E7F,
+ 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
+ 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E,
+ 0xDA220BAA, 0xE2336406, 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3,
+ 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13, 0xDECFBEC6, 0xE6DED16A,
+ 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
+ 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598,
+ 0x04EDB56C, 0x3CFCDAC0, 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1,
+ 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151, 0x37516AAE, 0x0F400502,
+ 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
+ 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023,
+ 0x625100D7, 0x5A406F7B, 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89,
+ 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539, 0x7D1400EC, 0x45056F40,
+ 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
+ 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5,
+ 0xBC9EBE11, 0x848FD1BD, 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C,
+ 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C, 0xA3DBBE2A, 0x9BCAD186,
+ 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
+ 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7,
+ 0xF6DBD453, 0xCECABBFF, 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8,
+ 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18, 0xABC5DECD, 0x93D4B161,
+ 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
+ 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593,
+ 0x71E7D567, 0x49F6BACB, 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA,
+ 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A, 0x750A600B, 0x4D1B0FA7,
+ 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
+ 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86,
+ 0x200A0A72, 0x181B65DE, 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C,
+ 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C, 0x3F4F0A49, 0x075E65E5,
+ 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
+ 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE,
+ 0xC994DE1A, 0xF185B1B6, 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497,
+ 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27, 0xD6D1DE21, 0xEEC0B18D,
+ 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
+ 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC,
+ 0x83D1B458, 0xBBC0DBF4, 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51,
+ 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1, 0x873C0134, 0xBF2D6E98,
+ 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
+ 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A,
+ 0x5D1E0A9E, 0x650F6532, 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013,
+ 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3},
+ {0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E,
+ 0x697997B4, 0x8649FCAD, 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5,
+ 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2, 0xC00C303E, 0x2F3C5B27,
+ 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
+ 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F,
+ 0xC973BF95, 0x2643D48C, 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57,
+ 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20, 0xE5F20E92, 0x0AC2658B,
+ 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
+ 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD,
+ 0x2C81B107, 0xC3B1DA1E, 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576,
+ 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201, 0x0E045BEB, 0xE13430F2,
+ 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
+ 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A,
+ 0x077BD440, 0xE84BBF59, 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F,
+ 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778, 0xAE0E73CA, 0x413E18D3,
+ 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
+ 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108,
+ 0xE289DAD2, 0x0DB9B1CB, 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3,
+ 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4, 0x4BFC7D58, 0xA4CC1641,
+ 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
+ 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929,
+ 0x4283F2F3, 0xADB399EA, 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C,
+ 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B, 0x7C0EAFC9, 0x933EC4D0,
+ 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
+ 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86,
+ 0xB57D105C, 0x5A4D7B45, 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D,
+ 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A, 0x99FCA15B, 0x76CCCA42,
+ 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
+ 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A,
+ 0x90832EF0, 0x7FB345E9, 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF,
+ 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8, 0x39F6897A, 0xD6C6E263,
+ 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
+ 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053,
+ 0x7B757B89, 0x94451090, 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8,
+ 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F, 0xD200DC03, 0x3D30B71A,
+ 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
+ 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872,
+ 0xDB7F53A8, 0x344F38B1, 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A,
+ 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D, 0xF7FEE2AF, 0x18CE89B6,
+ 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
+ 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0,
+ 0x3E8D5D3A, 0xD1BD3623, 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B,
+ 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C},
+ {0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919,
+ 0x75E69C41, 0x1DE5B089, 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B,
+ 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA, 0x9C5BFAA6, 0xF458D66E,
+ 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
+ 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC,
+ 0xA7909BB4, 0xCF93B77C, 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5,
+ 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334, 0x73767EEE, 0x1B755226,
+ 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
+ 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002,
+ 0xD4E6E55A, 0xBCE5C992, 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110,
+ 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1, 0x7AB7077A, 0x12B42BB2,
+ 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
+ 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330,
+ 0x417C6668, 0x297F4AA0, 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884,
+ 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55, 0xA8C1008F, 0xC0C22C47,
+ 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
+ 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE,
+ 0x320A1886, 0x5A09344E, 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC,
+ 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D, 0xDBB77E61, 0xB3B452A9,
+ 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
+ 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B,
+ 0xE07C1F73, 0x887F33BB, 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC,
+ 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D, 0xBB43F3A7, 0xD340DF6F,
+ 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
+ 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B,
+ 0x1CD36813, 0x74D044DB, 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59,
+ 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988, 0xC8358D49, 0xA036A181,
+ 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
+ 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903,
+ 0xF3FEEC5B, 0x9BFDC093, 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7,
+ 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766, 0x1A438ABC, 0x7240A674,
+ 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
+ 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097,
+ 0xFA3F95CF, 0x923CB907, 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185,
+ 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454, 0x1382F328, 0x7B81DFE0,
+ 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
+ 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762,
+ 0x2849923A, 0x404ABEF2, 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B,
+ 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA, 0xFCAF7760, 0x94AC5BA8,
+ 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
+ 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C,
+ 0x5B3FECD4, 0x333CC01C, 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E,
+ 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F},
+ {0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A,
+ 0xB3657823, 0xFA590504, 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3,
+ 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE, 0x847609B4, 0xCD4A7493,
+ 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
+ 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224,
+ 0x7528754D, 0x3C14086A, 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0,
+ 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D, 0x4F3B6143, 0x06071C64,
+ 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
+ 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367,
+ 0x3A13140E, 0x732F6929, 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E,
+ 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3, 0x1A00CB32, 0x533CB615,
+ 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
+ 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2,
+ 0xEB5EB7CB, 0xA262CAEC, 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF,
+ 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782, 0xDC4DC65C, 0x9571BB7B,
+ 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
+ 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1,
+ 0xA465D688, 0xED59ABAF, 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18,
+ 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75, 0x9376A71F, 0xDA4ADA38,
+ 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
+ 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F,
+ 0x6228DBE6, 0x2B14A6C1, 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D,
+ 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360, 0x763A92BE, 0x3F06EF99,
+ 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
+ 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A,
+ 0x0312E7F3, 0x4A2E9AD4, 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63,
+ 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E, 0x3901F3FD, 0x703D8EDA,
+ 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
+ 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D,
+ 0xC85F8F04, 0x8163F223, 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20,
+ 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D, 0xFF4CFE93, 0xB67083B4,
+ 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
+ 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C,
+ 0x9D642575, 0xD4585852, 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5,
+ 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88, 0xAA7754E2, 0xE34B29C5,
+ 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
+ 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72,
+ 0x5B29281B, 0x1215553C, 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6,
+ 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB, 0x613A3C15, 0x28064132,
+ 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
+ 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31,
+ 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8,
+ 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5},
+};
+
+#define CRC32_UPD(crc, n) \
+ (crc32c_tables[(n)][(crc)&0xFF] ^ \
+ crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
+
+static inline uint32_t
+crc32c_1byte(uint8_t data, uint32_t init_val)
+{
+ uint32_t crc;
+ crc = init_val;
+ crc ^= data;
+
+ return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
+}
+
+static inline uint32_t
+crc32c_2bytes(uint16_t data, uint32_t init_val)
+{
+ uint32_t crc;
+ crc = init_val;
+ crc ^= data;
+
+ crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
+
+ return crc;
+}
+
+static inline uint32_t
+crc32c_1word(uint32_t data, uint32_t init_val)
+{
+ uint32_t crc, term1, term2;
+ crc = init_val;
+ crc ^= data;
+
+ term1 = CRC32_UPD(crc, 3);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+static inline uint32_t
+crc32c_2words(uint64_t data, uint32_t init_val)
+{
+ uint32_t crc, term1, term2;
+ union {
+ uint64_t u64;
+ uint32_t u32[2];
+ } d;
+ d.u64 = data;
+
+ crc = init_val;
+ crc ^= d.u32[0];
+
+ term1 = CRC32_UPD(crc, 7);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 5);
+ term1 = CRC32_UPD(d.u32[1], 3);
+ term2 = d.u32[1] >> 16;
+ crc ^= term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+#endif /* HASH_CRC_SW */
diff --git a/lib/hash/hash_crc_x86.h b/lib/hash/hash_crc_x86.h
new file mode 100644
index 0000000000..b80a742afa
--- /dev/null
+++ b/lib/hash/hash_crc_x86.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_X86_H_
+#define _HASH_CRC_X86_H_
+
+static inline uint32_t
+crc32c_sse42_u8(uint8_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32b %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u16(uint16_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32w %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u32(uint32_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32l %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
+{
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } d;
+
+ d.u64 = data;
+ init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
+ init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
+ return (uint32_t)init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64(uint64_t data, uint64_t init_val)
+{
+ __asm__ volatile(
+ "crc32q %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return (uint32_t)init_val;
+}
+
+#endif
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 3e131aa6bb..1cc8f84fe2 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -21,400 +21,7 @@ extern "C" {
#include <rte_branch_prediction.h>
#include <rte_common.h>
-/* Lookup tables for software implementation of CRC32C */
-static const uint32_t crc32c_tables[8][256] = {{
- 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
- 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
- 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
- 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
- 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
- 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
- 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
- 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
- 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
- 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
- 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
- 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
- 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
- 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
- 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
- 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
- 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
- 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
- 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
- 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
- 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
- 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
- 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
- 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
- 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
- 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
- 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
- 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
- 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
- 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
- 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
- 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
-},
-{
- 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
- 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
- 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
- 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
- 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
- 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
- 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
- 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
- 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
- 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
- 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
- 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
- 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
- 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
- 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
- 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
- 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
- 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
- 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
- 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
- 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
- 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
- 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
- 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
- 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
- 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
- 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
- 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
- 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
- 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
- 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
- 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
-},
-{
- 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
- 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
- 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
- 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
- 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
- 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
- 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
- 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
- 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
- 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
- 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
- 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
- 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
- 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
- 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
- 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
- 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
- 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
- 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
- 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
- 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
- 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
- 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
- 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
- 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
- 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
- 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
- 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
- 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
- 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
- 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
- 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
-},
-{
- 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
- 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
- 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
- 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
- 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
- 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
- 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
- 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
- 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
- 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
- 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
- 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
- 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
- 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
- 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
- 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
- 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
- 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
- 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
- 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
- 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
- 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
- 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
- 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
- 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
- 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
- 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
- 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
- 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
- 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
- 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
- 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
-},
-{
- 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
- 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
- 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
- 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
- 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
- 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
- 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
- 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
- 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
- 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
- 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
- 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
- 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
- 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
- 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
- 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
- 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
- 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
- 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
- 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
- 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
- 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
- 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
- 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
- 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
- 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
- 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
- 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
- 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
- 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
- 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
- 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
-},
-{
- 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
- 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
- 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
- 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
- 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
- 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
- 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
- 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
- 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
- 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
- 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
- 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
- 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
- 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
- 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
- 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
- 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
- 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
- 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
- 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
- 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
- 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
- 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
- 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
- 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
- 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
- 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
- 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
- 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
- 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
- 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
- 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
-},
-{
- 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
- 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
- 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
- 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
- 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
- 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
- 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
- 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
- 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
- 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
- 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
- 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
- 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
- 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
- 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
- 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
- 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
- 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
- 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
- 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
- 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
- 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
- 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
- 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
- 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
- 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
- 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
- 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
- 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
- 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
- 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
- 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
-},
-{
- 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
- 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
- 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
- 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
- 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
- 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
- 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
- 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
- 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
- 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
- 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
- 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
- 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
- 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
- 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
- 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
- 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
- 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
- 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
- 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
- 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
- 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
- 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
- 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
- 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
- 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
- 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
- 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
- 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
- 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
- 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
- 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
-}};
-
-#define CRC32_UPD(crc, n) \
- (crc32c_tables[(n)][(crc) & 0xFF] ^ \
- crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
-
-static inline uint32_t
-crc32c_1byte(uint8_t data, uint32_t init_val)
-{
- uint32_t crc;
- crc = init_val;
- crc ^= data;
-
- return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
-}
-
-static inline uint32_t
-crc32c_2bytes(uint16_t data, uint32_t init_val)
-{
- uint32_t crc;
- crc = init_val;
- crc ^= data;
-
- crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
-
- return crc;
-}
-
-static inline uint32_t
-crc32c_1word(uint32_t data, uint32_t init_val)
-{
- uint32_t crc, term1, term2;
- crc = init_val;
- crc ^= data;
-
- term1 = CRC32_UPD(crc, 3);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-static inline uint32_t
-crc32c_2words(uint64_t data, uint32_t init_val)
-{
- uint32_t crc, term1, term2;
- union {
- uint64_t u64;
- uint32_t u32[2];
- } d;
- d.u64 = data;
-
- crc = init_val;
- crc ^= d.u32[0];
-
- term1 = CRC32_UPD(crc, 7);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 5);
- term1 = CRC32_UPD(d.u32[1], 3);
- term2 = d.u32[1] >> 16;
- crc ^= term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-#if defined(RTE_ARCH_X86)
-static inline uint32_t
-crc32c_sse42_u8(uint8_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32b %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u16(uint16_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32w %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u32(uint32_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32l %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
-{
- union {
- uint32_t u32[2];
- uint64_t u64;
- } d;
-
- d.u64 = data;
- init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
- init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
- return (uint32_t)init_val;
-}
-#endif
-
-#ifdef RTE_ARCH_X86_64
-static inline uint32_t
-crc32c_sse42_u64(uint64_t data, uint64_t init_val)
-{
- __asm__ volatile(
- "crc32q %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return (uint32_t)init_val;
-}
-#endif
+#include "hash_crc_sw.h"
#define CRC32_SW (1U << 0)
#define CRC32_SSE42 (1U << 1)
@@ -427,6 +34,7 @@ static uint8_t crc32_alg = CRC32_SW;
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
#include "rte_crc_arm64.h"
#else
+#include "hash_crc_x86.h"
/**
 * Allow or disallow use of SSE4.2 intrinsics for CRC32 hash
--
2.17.1
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v3 1/2] hash: split x86 and SW hash CRC intrinsics
@ 2021-10-03 23:00 1% ` pbhagavatula
2021-10-04 5:52 1% ` [dpdk-dev] [PATCH v4 " pbhagavatula
0 siblings, 1 reply; 200+ results
From: pbhagavatula @ 2021-10-03 23:00 UTC (permalink / raw)
To: ruifeng.wang, konstantin.ananyev, jerinj, Yipeng Wang,
Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Split x86 and SW hash CRC intrinsics into separate files.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v3 Changes:
- Split x86 and SW hash crc functions into separate files.
- Rename `rte_crc_arm64.h` to `hash_crc_arm64.h` as it is internal and not
  installed by the meson build.
v2 Changes:
- Don't remove `rte_crc_arm64.h` for ABI purposes.
- Revert function pointer approach for performance reasons.
- Select the best available algorithm based on the arch when the user passes
  an unsupported crc32 algorithm.
lib/hash/hash_crc_sw.h | 419 ++++++++++++++++++++++++++++++++++++++++
lib/hash/hash_crc_x86.h | 62 ++++++
lib/hash/rte_hash_crc.h | 396 +------------------------------------
3 files changed, 483 insertions(+), 394 deletions(-)
create mode 100644 lib/hash/hash_crc_sw.h
create mode 100644 lib/hash/hash_crc_x86.h
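
A note on the fall-back mentioned in the v2 changes: after the split,
rte_hash_crc.h remains the public entry point; it pulls in the arch-specific
header (see the rte_hash_crc.h hunks below) and keeps a crc32_alg selector.
The following is a hypothetical sketch of that "best available" fall-back,
not the exact code in this patch -- CRC32_SW and CRC32_SSE42 match the flags
visible in the hunks, while CRC32_ARM64 and the function name are
illustrative placeholders:

#include <stdint.h>

#define CRC32_SW     (1U << 0)
#define CRC32_SSE42  (1U << 1)
#define CRC32_ARM64  (1U << 3)	/* illustrative; rte_hash_crc.h defines its own */

static uint8_t crc32_alg = CRC32_SW;

/* If the requested algorithm cannot run on this architecture, silently
 * select the best one that can, instead of failing.
 */
static inline void
crc32_select_alg_sketch(uint8_t requested)
{
#if defined(RTE_ARCH_X86)
	if (requested & (CRC32_SSE42 | CRC32_SW))
		crc32_alg = requested;
	else
		crc32_alg = CRC32_SSE42;	/* best available on x86 */
#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
	if (requested & (CRC32_ARM64 | CRC32_SW))
		crc32_alg = requested;
	else
		crc32_alg = CRC32_ARM64;	/* best available on arm64 */
#else
	(void)requested;
	crc32_alg = CRC32_SW;		/* the tables always work */
#endif
}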
diff --git a/lib/hash/hash_crc_sw.h b/lib/hash/hash_crc_sw.h
new file mode 100644
index 0000000000..4790a0970b
--- /dev/null
+++ b/lib/hash/hash_crc_sw.h
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_SW_H_
+#define _HASH_CRC_SW_H_
+
+/* Lookup tables for software implementation of CRC32C */
+static const uint32_t crc32c_tables[8][256] = {
+ {0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C,
+ 0x26A1E7E8, 0xD4CA64EB, 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B,
+ 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, 0x105EC76F, 0xE235446C,
+ 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
+ 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC,
+ 0xBC267848, 0x4E4DFB4B, 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A,
+ 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, 0xAA64D611, 0x580F5512,
+ 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
+ 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD,
+ 0x1642AE59, 0xE4292D5A, 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A,
+ 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, 0x417B1DBC, 0xB3109EBF,
+ 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
+ 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F,
+ 0xED03A29B, 0x1F682198, 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927,
+ 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, 0xDBFC821C, 0x2997011F,
+ 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
+ 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E,
+ 0x4767748A, 0xB50CF789, 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859,
+ 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, 0x7198540D, 0x83F3D70E,
+ 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
+ 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE,
+ 0xDDE0EB2A, 0x2F8B6829, 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C,
+ 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, 0x082F63B7, 0xFA44E0B4,
+ 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
+ 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B,
+ 0xB4091BFF, 0x466298FC, 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C,
+ 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, 0xA24BB5A6, 0x502036A5,
+ 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
+ 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975,
+ 0x0E330A81, 0xFC588982, 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D,
+ 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, 0x38CC2A06, 0xCAA7A905,
+ 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
+ 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8,
+ 0xE52CC12C, 0x1747422F, 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF,
+ 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, 0xD3D3E1AB, 0x21B862A8,
+ 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
+ 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78,
+ 0x7FAB5E8C, 0x8DC0DD8F, 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE,
+ 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, 0x69E9F0D5, 0x9B8273D6,
+ 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
+ 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69,
+ 0xD5CF889D, 0x27A40B9E, 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E,
+ 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351},
+ {0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB,
+ 0x69CF5132, 0x7A6DC945, 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21,
+ 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD, 0x3FC5F181, 0x2C6769F6,
+ 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
+ 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92,
+ 0xCB1E630B, 0xD8BCFB7C, 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B,
+ 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47, 0xE29F20BA, 0xF13DB8CD,
+ 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
+ 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28,
+ 0x298143B1, 0x3A23DBC6, 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2,
+ 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E, 0xFF17C604, 0xECB55E73,
+ 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
+ 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17,
+ 0x0BCC548E, 0x186ECCF9, 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C,
+ 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0, 0x5DC6F43D, 0x4E646C4A,
+ 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
+ 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD,
+ 0xE9537434, 0xFAF1EC43, 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27,
+ 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB, 0xBF59D487, 0xACFB4CF0,
+ 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
+ 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94,
+ 0x4B82460D, 0x5820DE7A, 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260,
+ 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC, 0x66D73941, 0x7575A136,
+ 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
+ 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3,
+ 0xADC95A4A, 0xBE6BC23D, 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059,
+ 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185, 0x844819FB, 0x97EA818C,
+ 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
+ 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8,
+ 0x70938B71, 0x63311306, 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3,
+ 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F, 0x26992BC2, 0x353BB3B5,
+ 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
+ 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556,
+ 0x6D1B6DCF, 0x7EB9F5B8, 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC,
+ 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600, 0x3B11CD7C, 0x28B3550B,
+ 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
+ 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F,
+ 0xCFCA5FF6, 0xDC68C781, 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766,
+ 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA, 0xE64B1C47, 0xF5E98430,
+ 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
+ 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5,
+ 0x2D557F4C, 0x3EF7E73B, 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F,
+ 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483},
+ {0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664,
+ 0xD1B1F617, 0x74F06469, 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6,
+ 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC, 0x70A27D8A, 0xD5E3EFF4,
+ 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
+ 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B,
+ 0x9942B558, 0x3C032726, 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67,
+ 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D, 0xD915C5D1, 0x7C5457AF,
+ 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
+ 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA,
+ 0x40577089, 0xE516E2F7, 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828,
+ 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32, 0xC76580D9, 0x622412A7,
+ 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
+ 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878,
+ 0x2E85480B, 0x8BC4DA75, 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20,
+ 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A, 0x8F96C396, 0x2AD751E8,
+ 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
+ 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9,
+ 0xF7908DDA, 0x52D11FA4, 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B,
+ 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161, 0x56830647, 0xF3C29439,
+ 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
+ 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6,
+ 0xBF63CE95, 0x1A225CEB, 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730,
+ 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A, 0xB3764986, 0x1637DBF8,
+ 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
+ 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD,
+ 0x2A34FCDE, 0x8F756EA0, 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F,
+ 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065, 0x6A638C57, 0xCF221E29,
+ 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
+ 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6,
+ 0x83834485, 0x26C2D6FB, 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE,
+ 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4, 0x2290CF18, 0x87D15D66,
+ 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
+ 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE,
+ 0x9DF3018D, 0x38B293F3, 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C,
+ 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36, 0x3CE08A10, 0x99A1186E,
+ 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
+ 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1,
+ 0xD50042C2, 0x7041D0BC, 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD,
+ 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7, 0x9557324B, 0x3016A035,
+ 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
+ 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760,
+ 0x0C158713, 0xA954156D, 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2,
+ 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8},
+ {0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B,
+ 0xC4451272, 0x1900B8CA, 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF,
+ 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C, 0xE964B13D, 0x34211B85,
+ 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
+ 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990,
+ 0xDB65C0A9, 0x06206A11, 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2,
+ 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41, 0x2161776D, 0xFC24DDD5,
+ 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
+ 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD,
+ 0xFA04B7C4, 0x27411D7C, 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69,
+ 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A, 0xABA65FE7, 0x76E3F55F,
+ 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
+ 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A,
+ 0x99A72E73, 0x44E284CB, 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3,
+ 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610, 0xB4868D3C, 0x69C32784,
+ 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
+ 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027,
+ 0xB8C6591E, 0x6583F3A6, 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3,
+ 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040, 0x95E7FA51, 0x48A250E9,
+ 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
+ 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC,
+ 0xA7E68BC5, 0x7AA3217D, 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006,
+ 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5, 0xA4E4AAD9, 0x79A10061,
+ 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
+ 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349,
+ 0x7F816A70, 0xA2C4C0C8, 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD,
+ 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E, 0x8585DDB4, 0x58C0770C,
+ 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
+ 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519,
+ 0xB784AC20, 0x6AC10698, 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0,
+ 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443, 0x9AA50F6F, 0x47E0A5D7,
+ 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
+ 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93,
+ 0x3D4384AA, 0xE0062E12, 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07,
+ 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4, 0x106227E5, 0xCD278D5D,
+ 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
+ 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48,
+ 0x22635671, 0xFF26FCC9, 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A,
+ 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99, 0xD867E1B5, 0x05224B0D,
+ 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
+ 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825,
+ 0x0302211C, 0xDE478BA4, 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1,
+ 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842},
+ {0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C,
+ 0x906761E8, 0xA8760E44, 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65,
+ 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5, 0x8F2261D3, 0xB7330E7F,
+ 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
+ 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E,
+ 0xDA220BAA, 0xE2336406, 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3,
+ 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13, 0xDECFBEC6, 0xE6DED16A,
+ 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
+ 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598,
+ 0x04EDB56C, 0x3CFCDAC0, 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1,
+ 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151, 0x37516AAE, 0x0F400502,
+ 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
+ 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023,
+ 0x625100D7, 0x5A406F7B, 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89,
+ 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539, 0x7D1400EC, 0x45056F40,
+ 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
+ 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5,
+ 0xBC9EBE11, 0x848FD1BD, 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C,
+ 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C, 0xA3DBBE2A, 0x9BCAD186,
+ 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
+ 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7,
+ 0xF6DBD453, 0xCECABBFF, 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8,
+ 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18, 0xABC5DECD, 0x93D4B161,
+ 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
+ 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593,
+ 0x71E7D567, 0x49F6BACB, 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA,
+ 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A, 0x750A600B, 0x4D1B0FA7,
+ 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
+ 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86,
+ 0x200A0A72, 0x181B65DE, 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C,
+ 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C, 0x3F4F0A49, 0x075E65E5,
+ 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
+ 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE,
+ 0xC994DE1A, 0xF185B1B6, 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497,
+ 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27, 0xD6D1DE21, 0xEEC0B18D,
+ 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
+ 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC,
+ 0x83D1B458, 0xBBC0DBF4, 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51,
+ 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1, 0x873C0134, 0xBF2D6E98,
+ 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
+ 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A,
+ 0x5D1E0A9E, 0x650F6532, 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013,
+ 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3},
+ {0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E,
+ 0x697997B4, 0x8649FCAD, 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5,
+ 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2, 0xC00C303E, 0x2F3C5B27,
+ 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
+ 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F,
+ 0xC973BF95, 0x2643D48C, 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57,
+ 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20, 0xE5F20E92, 0x0AC2658B,
+ 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
+ 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD,
+ 0x2C81B107, 0xC3B1DA1E, 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576,
+ 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201, 0x0E045BEB, 0xE13430F2,
+ 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
+ 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A,
+ 0x077BD440, 0xE84BBF59, 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F,
+ 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778, 0xAE0E73CA, 0x413E18D3,
+ 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
+ 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108,
+ 0xE289DAD2, 0x0DB9B1CB, 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3,
+ 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4, 0x4BFC7D58, 0xA4CC1641,
+ 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
+ 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929,
+ 0x4283F2F3, 0xADB399EA, 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C,
+ 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B, 0x7C0EAFC9, 0x933EC4D0,
+ 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
+ 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86,
+ 0xB57D105C, 0x5A4D7B45, 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D,
+ 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A, 0x99FCA15B, 0x76CCCA42,
+ 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
+ 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A,
+ 0x90832EF0, 0x7FB345E9, 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF,
+ 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8, 0x39F6897A, 0xD6C6E263,
+ 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
+ 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053,
+ 0x7B757B89, 0x94451090, 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8,
+ 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F, 0xD200DC03, 0x3D30B71A,
+ 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
+ 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872,
+ 0xDB7F53A8, 0x344F38B1, 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A,
+ 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D, 0xF7FEE2AF, 0x18CE89B6,
+ 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
+ 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0,
+ 0x3E8D5D3A, 0xD1BD3623, 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B,
+ 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C},
+ {0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919,
+ 0x75E69C41, 0x1DE5B089, 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B,
+ 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA, 0x9C5BFAA6, 0xF458D66E,
+ 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
+ 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC,
+ 0xA7909BB4, 0xCF93B77C, 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5,
+ 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334, 0x73767EEE, 0x1B755226,
+ 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
+ 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002,
+ 0xD4E6E55A, 0xBCE5C992, 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110,
+ 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1, 0x7AB7077A, 0x12B42BB2,
+ 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
+ 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330,
+ 0x417C6668, 0x297F4AA0, 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884,
+ 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55, 0xA8C1008F, 0xC0C22C47,
+ 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
+ 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE,
+ 0x320A1886, 0x5A09344E, 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC,
+ 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D, 0xDBB77E61, 0xB3B452A9,
+ 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
+ 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B,
+ 0xE07C1F73, 0x887F33BB, 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC,
+ 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D, 0xBB43F3A7, 0xD340DF6F,
+ 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
+ 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B,
+ 0x1CD36813, 0x74D044DB, 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59,
+ 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988, 0xC8358D49, 0xA036A181,
+ 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
+ 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903,
+ 0xF3FEEC5B, 0x9BFDC093, 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7,
+ 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766, 0x1A438ABC, 0x7240A674,
+ 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
+ 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097,
+ 0xFA3F95CF, 0x923CB907, 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185,
+ 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454, 0x1382F328, 0x7B81DFE0,
+ 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
+ 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762,
+ 0x2849923A, 0x404ABEF2, 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B,
+ 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA, 0xFCAF7760, 0x94AC5BA8,
+ 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
+ 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C,
+ 0x5B3FECD4, 0x333CC01C, 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E,
+ 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F},
+ {0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A,
+ 0xB3657823, 0xFA590504, 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3,
+ 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE, 0x847609B4, 0xCD4A7493,
+ 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
+ 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224,
+ 0x7528754D, 0x3C14086A, 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0,
+ 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D, 0x4F3B6143, 0x06071C64,
+ 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
+ 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367,
+ 0x3A13140E, 0x732F6929, 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E,
+ 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3, 0x1A00CB32, 0x533CB615,
+ 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
+ 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2,
+ 0xEB5EB7CB, 0xA262CAEC, 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF,
+ 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782, 0xDC4DC65C, 0x9571BB7B,
+ 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
+ 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1,
+ 0xA465D688, 0xED59ABAF, 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18,
+ 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75, 0x9376A71F, 0xDA4ADA38,
+ 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
+ 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F,
+ 0x6228DBE6, 0x2B14A6C1, 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D,
+ 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360, 0x763A92BE, 0x3F06EF99,
+ 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
+ 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A,
+ 0x0312E7F3, 0x4A2E9AD4, 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63,
+ 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E, 0x3901F3FD, 0x703D8EDA,
+ 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
+ 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D,
+ 0xC85F8F04, 0x8163F223, 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20,
+ 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D, 0xFF4CFE93, 0xB67083B4,
+ 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
+ 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C,
+ 0x9D642575, 0xD4585852, 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5,
+ 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88, 0xAA7754E2, 0xE34B29C5,
+ 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
+ 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72,
+ 0x5B29281B, 0x1215553C, 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6,
+ 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB, 0x613A3C15, 0x28064132,
+ 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
+ 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31,
+ 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8,
+ 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5},
+};
+
+#define CRC32_UPD(crc, n) \
+ (crc32c_tables[(n)][(crc)&0xFF] ^ \
+ crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
+
+static inline uint32_t
+crc32c_1byte(uint8_t data, uint32_t init_val)
+{
+ uint32_t crc;
+ crc = init_val;
+ crc ^= data;
+
+ return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
+}
+
+static inline uint32_t
+crc32c_2bytes(uint16_t data, uint32_t init_val)
+{
+ uint32_t crc;
+ crc = init_val;
+ crc ^= data;
+
+ crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
+
+ return crc;
+}
+
+static inline uint32_t
+crc32c_1word(uint32_t data, uint32_t init_val)
+{
+ uint32_t crc, term1, term2;
+ crc = init_val;
+ crc ^= data;
+
+ term1 = CRC32_UPD(crc, 3);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+static inline uint32_t
+crc32c_2words(uint64_t data, uint32_t init_val)
+{
+ uint32_t crc, term1, term2;
+ union {
+ uint64_t u64;
+ uint32_t u32[2];
+ } d;
+ d.u64 = data;
+
+ crc = init_val;
+ crc ^= d.u32[0];
+
+ term1 = CRC32_UPD(crc, 7);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 5);
+ term1 = CRC32_UPD(d.u32[1], 3);
+ term2 = d.u32[1] >> 16;
+ crc ^= term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+#endif /* _HASH_CRC_SW_H_ */
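
The helpers above only handle fixed widths of 1, 2, 4 and 8 bytes. As a rough
illustration of how they chain to checksum a whole buffer -- a hypothetical
sketch assuming hash_crc_sw.h from this patch is on the include path, not the
rte_hash_crc() implementation itself -- one could write:

#include <stdint.h>
#include <string.h>

#include "hash_crc_sw.h"

/* Consume 8-byte blocks first, then mop up the 4/1-byte tail; like the
 * callers in rte_hash_crc.h this treats the buffer as little-endian words.
 */
static inline uint32_t
crc32c_sw_buffer(const void *data, size_t len, uint32_t init_val)
{
	const uint8_t *p = data;
	uint32_t crc = init_val;
	uint64_t d8;
	uint32_t d4;

	while (len >= 8) {
		memcpy(&d8, p, 8);	/* avoids unaligned loads */
		crc = crc32c_2words(d8, crc);
		p += 8;
		len -= 8;
	}
	if (len >= 4) {
		memcpy(&d4, p, 4);
		crc = crc32c_1word(d4, crc);
		p += 4;
		len -= 4;
	}
	while (len--)
		crc = crc32c_1byte(*p++, crc);

	return crc;
}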
diff --git a/lib/hash/hash_crc_x86.h b/lib/hash/hash_crc_x86.h
new file mode 100644
index 0000000000..b80a742afa
--- /dev/null
+++ b/lib/hash/hash_crc_x86.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_X86_H_
+#define _HASH_CRC_X86_H_
+
+static inline uint32_t
+crc32c_sse42_u8(uint8_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32b %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u16(uint16_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32w %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u32(uint32_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32l %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
+{
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } d;
+
+ d.u64 = data;
+ init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
+ init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
+ return (uint32_t)init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64(uint64_t data, uint64_t init_val)
+{
+ __asm__ volatile(
+ "crc32q %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return (uint32_t)init_val;
+}
+
+#endif
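
These SSE4.2 helpers still assume the instruction is present, so a caller has
to probe the CPU before preferring them over the table-driven path. Below is a
hypothetical sketch of such a probe, using the GCC/Clang builtin
__builtin_cpu_supports() as a stand-in for DPDK's CPU-flag helpers; the real
rte_hash_crc.h selects an algorithm once via crc32_alg rather than per call:

#include <stdint.h>

#include "hash_crc_sw.h"	/* table-driven fall-back, added above */
#include "hash_crc_x86.h"	/* SSE4.2 helpers, added above */

static inline uint32_t
crc32c_u64_dispatch(uint64_t data, uint32_t init_val)
{
	if (__builtin_cpu_supports("sse4.2"))
		/* a 64-bit build could call crc32c_sse42_u64() directly */
		return crc32c_sse42_u64_mimic(data, init_val);

	return crc32c_2words(data, init_val);
}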
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 3e131aa6bb..1cc8f84fe2 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -21,400 +21,7 @@ extern "C" {
#include <rte_branch_prediction.h>
#include <rte_common.h>
-/* Lookup tables for software implementation of CRC32C */
-static const uint32_t crc32c_tables[8][256] = {{
- 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
- 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
- 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
- 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
- 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
- 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
- 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
- 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
- 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
- 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
- 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
- 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
- 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
- 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
- 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
- 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
- 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
- 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
- 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
- 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
- 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
- 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
- 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
- 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
- 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
- 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
- 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
- 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
- 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
- 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
- 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
- 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
-},
-{
- 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
- 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
- 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
- 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
- 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
- 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
- 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
- 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
- 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
- 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
- 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
- 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
- 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
- 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
- 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
- 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
- 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
- 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
- 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
- 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
- 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
- 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
- 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
- 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
- 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
- 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
- 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
- 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
- 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
- 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
- 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
- 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
-},
-{
- 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
- 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
- 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
- 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
- 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
- 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
- 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
- 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
- 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
- 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
- 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
- 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
- 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
- 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
- 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
- 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
- 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
- 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
- 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
- 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
- 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
- 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
- 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
- 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
- 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
- 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
- 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
- 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
- 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
- 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
- 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
- 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
-},
-{
- 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
- 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
- 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
- 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
- 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
- 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
- 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
- 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
- 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
- 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
- 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
- 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
- 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
- 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
- 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
- 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
- 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
- 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
- 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
- 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
- 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
- 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
- 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
- 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
- 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
- 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
- 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
- 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
- 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
- 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
- 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
- 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
-},
-{
- 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
- 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
- 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
- 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
- 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
- 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
- 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
- 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
- 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
- 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
- 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
- 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
- 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
- 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
- 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
- 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
- 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
- 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
- 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
- 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
- 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
- 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
- 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
- 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
- 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
- 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
- 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
- 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
- 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
- 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
- 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
- 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
-},
-{
- 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
- 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
- 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
- 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
- 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
- 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
- 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
- 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
- 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
- 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
- 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
- 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
- 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
- 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
- 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
- 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
- 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
- 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
- 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
- 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
- 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
- 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
- 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
- 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
- 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
- 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
- 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
- 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
- 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
- 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
- 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
- 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
-},
-{
- 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
- 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
- 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
- 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
- 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
- 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
- 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
- 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
- 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
- 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
- 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
- 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
- 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
- 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
- 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
- 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
- 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
- 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
- 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
- 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
- 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
- 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
- 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
- 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
- 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
- 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
- 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
- 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
- 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
- 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
- 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
- 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
-},
-{
- 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
- 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
- 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
- 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
- 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
- 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
- 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
- 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
- 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
- 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
- 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
- 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
- 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
- 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
- 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
- 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
- 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
- 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
- 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
- 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
- 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
- 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
- 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
- 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
- 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
- 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
- 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
- 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
- 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
- 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
- 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
- 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
-}};
-
-#define CRC32_UPD(crc, n) \
- (crc32c_tables[(n)][(crc) & 0xFF] ^ \
- crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
-
-static inline uint32_t
-crc32c_1byte(uint8_t data, uint32_t init_val)
-{
- uint32_t crc;
- crc = init_val;
- crc ^= data;
-
- return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
-}
-
-static inline uint32_t
-crc32c_2bytes(uint16_t data, uint32_t init_val)
-{
- uint32_t crc;
- crc = init_val;
- crc ^= data;
-
- crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
-
- return crc;
-}
-
-static inline uint32_t
-crc32c_1word(uint32_t data, uint32_t init_val)
-{
- uint32_t crc, term1, term2;
- crc = init_val;
- crc ^= data;
-
- term1 = CRC32_UPD(crc, 3);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-static inline uint32_t
-crc32c_2words(uint64_t data, uint32_t init_val)
-{
- uint32_t crc, term1, term2;
- union {
- uint64_t u64;
- uint32_t u32[2];
- } d;
- d.u64 = data;
-
- crc = init_val;
- crc ^= d.u32[0];
-
- term1 = CRC32_UPD(crc, 7);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 5);
- term1 = CRC32_UPD(d.u32[1], 3);
- term2 = d.u32[1] >> 16;
- crc ^= term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-#if defined(RTE_ARCH_X86)
-static inline uint32_t
-crc32c_sse42_u8(uint8_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32b %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u16(uint16_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32w %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u32(uint32_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32l %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
-{
- union {
- uint32_t u32[2];
- uint64_t u64;
- } d;
-
- d.u64 = data;
- init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
- init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
- return (uint32_t)init_val;
-}
-#endif
-
-#ifdef RTE_ARCH_X86_64
-static inline uint32_t
-crc32c_sse42_u64(uint64_t data, uint64_t init_val)
-{
- __asm__ volatile(
- "crc32q %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return (uint32_t)init_val;
-}
-#endif
+#include <hash_crc_sw.h>
#define CRC32_SW (1U << 0)
#define CRC32_SSE42 (1U << 1)
@@ -427,6 +34,7 @@ static uint8_t crc32_alg = CRC32_SW;
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
#include "rte_crc_arm64.h"
#else
+#include "hash_crc_x86.h"
/**
* Allow or disallow use of SSE4.2 instrinsics for CRC32 hash
--
2.17.1
^ permalink raw reply [relevance 1%]
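For reference, the software fallback above is a classic byte-wise, table-driven CRC32-C. A minimal self-contained sketch of the same update step as crc32c_1byte() is shown below; the table is generated at run time here instead of using the pre-computed crc32c_tables[], and the function/table names are illustrative only.

#include <stddef.h>
#include <stdint.h>

static uint32_t crc32c_table[256];

/* Build the single lookup table for CRC32-C (Castagnoli polynomial
 * 0x1EDC6F41, i.e. 0x82F63B78 in reflected form). */
static void
crc32c_table_init(void)
{
	uint32_t i, j, crc;

	for (i = 0; i < 256; i++) {
		crc = i;
		for (j = 0; j < 8; j++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78 : 0);
		crc32c_table[i] = crc;
	}
}

/* Same per-byte update as crc32c_1byte(): XOR the byte into the low bits,
 * then replace the low byte through one table lookup. */
static uint32_t
crc32c_sw(const void *data, size_t len, uint32_t init_val)
{
	const uint8_t *p = data;
	uint32_t crc = init_val;

	while (len-- != 0)
		crc = crc32c_table[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
	return crc;
}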
* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
2021-10-03 20:58 0% ` Ananyev, Konstantin
@ 2021-10-03 21:10 0% ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-10-03 21:10 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
Ferruh, mdr, Jayatheerthan, Jay
>
>> >Copy public function pointers (rx_pkt_burst(), etc.) and related
>> >pointers to internal data from rte_eth_dev structure into a
>> >separate flat array. That array will remain in a public header.
>> >The intention here is to make rte_eth_dev and related structures
>> >internal.
>> >That should allow future possible changes to core eth_dev
>structures
>> >to be transparent to the user and help to avoid ABI/API breakages.
>> >The plan is to keep minimal part of data from rte_eth_dev public,
>> >so we still can use inline functions for 'fast' calls
>> >(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> >
>> >Signed-off-by: Konstantin Ananyev
><konstantin.ananyev@intel.com>
>> >---
>> > lib/ethdev/ethdev_private.c | 52
>> >++++++++++++++++++++++++++++++++++++
>> > lib/ethdev/ethdev_private.h | 7 +++++
>> > lib/ethdev/rte_ethdev.c | 17 ++++++++++++
>> > lib/ethdev/rte_ethdev_core.h | 45
>> >+++++++++++++++++++++++++++++++
>> > 4 files changed, 121 insertions(+)
>> >
>>
>> <snip>
>>
>> >+void
>> >+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>> >+{
>> >+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>> >+ static const struct rte_eth_fp_ops dummy_ops = {
>> >+ .rx_pkt_burst = dummy_eth_rx_burst,
>> >+ .tx_pkt_burst = dummy_eth_tx_burst,
>> >+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
>> >+ .txq = {.data = dummy_data, .clbk = dummy_data,},
>> >+ };
>> >+
>> >+ *fpo = dummy_ops;
>> >+}
>> >+
>> >+void
>> >+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> >+ const struct rte_eth_dev *dev)
>> >+{
>> >+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
>> >+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
>> >+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>> >+ fpo->rx_queue_count = dev->rx_queue_count;
>> >+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
>> >+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
>> >+
>> >+ fpo->rxq.data = dev->data->rx_queues;
>> >+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>> >+
>> >+ fpo->txq.data = dev->data->tx_queues;
>> >+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>> >+}
>> >diff --git a/lib/ethdev/ethdev_private.h
>b/lib/ethdev/ethdev_private.h
>> >index 3724429577..40333e7651 100644
>> >--- a/lib/ethdev/ethdev_private.h
>> >+++ b/lib/ethdev/ethdev_private.h
>> >@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
>> >*_start, rte_eth_cmp_t cmp,
>> > /* Parse devargs value for representor parameter. */
>> > int rte_eth_devargs_parse_representor_ports(char *str, void
>*data);
>> >
>> >+/* reset eth 'fast' API to dummy values */
>> >+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> >+
>> >+/* setup eth 'fast' API to ethdev values */
>> >+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> >+ const struct rte_eth_dev *dev);
>> >+
>> > #endif /* _ETH_PRIVATE_H_ */
>> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> >index 424bc260fa..9fbb1bc3db 100644
>> >--- a/lib/ethdev/rte_ethdev.c
>> >+++ b/lib/ethdev/rte_ethdev.c
>> >@@ -44,6 +44,9 @@
>> > static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>> > struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>> >
>> >+/* public 'fast' API */
>> >+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> >+
>> > /* spinlock for eth device callbacks */
>> > static rte_spinlock_t eth_dev_cb_lock =
>RTE_SPINLOCK_INITIALIZER;
>> >
>> >@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
>> > (*dev->dev_ops->link_update)(dev, 0);
>> > }
>> >
>> >+ /* expose selection of PMD rx/tx function */
>> >+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>> >+
>>
>> Secondary process will not set these properly, I believe, as it might not
>> call start(); and if it does, the primary process ops will not be set.
>
>That's a very good point, have to admit - I missed that part.
>
>>
>> One simple solution is to call ops_setup() around
>rte_eth_dev_attach_secondary()
>> but if application doesn't invoke start() on Primary the ops will not be
>set for it.
>
>I think rte_eth_dev_attach_secondary() wouldn't work, as majority of
>the PMDs setup
>fast ops function pointers after it.
>From reading the code rte_eth_dev_probing_finish() seems like a good
>choice -
>as it is always the final point in device initialization for secondary
>process.
Ack, makes sense to me; I did a similar thing for the event device in
http://patches.dpdk.org/project/dpdk/patch/20211003082710.8398-4-pbhagavatula@marvell.com/
>
>BTW, we also need something similar at de-init phase.
>rte_eth_dev_release_port() seems like a good candidate for it.
>
In hindsight, I should have added the reset to rte_event_pmd_pci_remove(); I will add it in the next version.
>
>>
>> > rte_ethdev_trace_start(port_id);
>> > return 0;
>> > }
>> >@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
>> > return 0;
>> > }
>> >
>> >+ /* point rx/tx functions to dummy ones */
>> >+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>> >+
>> > dev->data->dev_started = 0;
>> > ret = (*dev->dev_ops->dev_stop)(dev);
>> > rte_ethdev_trace_stop(port_id, ret);
>> >2.26.3
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 1/3] security: add SA config option for inner pkt csum
@ 2021-10-03 21:09 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 21:09 UTC (permalink / raw)
To: Archana Muniganti, gakhil, Nicolau, Radu, Zhang, Roy Fan, hemant.agrawal
Cc: anoobj, ktejasree, adwivedi, jerinj, dev
>
> Add inner packet IPv4 hdr and L4 checksum enable options
> in conf. These will be used in case of protocol offload.
> Per SA, application could specify whether the
> checksum(compute/verify) can be offloaded to security device.
>
> Signed-off-by: Archana Muniganti <marchana@marvell.com>
> ---
> doc/guides/cryptodevs/features/default.ini | 1 +
> doc/guides/rel_notes/deprecation.rst | 4 +--
> doc/guides/rel_notes/release_21_11.rst | 4 +++
> lib/cryptodev/rte_cryptodev.h | 2 ++
> lib/security/rte_security.h | 31 ++++++++++++++++++++++
> 5 files changed, 40 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
> index c24814de98..96d95ddc81 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -33,6 +33,7 @@ Non-Byte aligned data =
> Sym raw data path API =
> Cipher multiple data units =
> Cipher wrapped key =
> +Inner checksum =
>
> ;
> ; Supported crypto algorithms of a default crypto driver.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 05fc2fdee7..8308e00ed4 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -232,8 +232,8 @@ Deprecation Notices
> IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
>
> * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> - will be updated with new fields to support new features like IPsec inner
> - checksum, TSO in case of protocol offload.
> + will be updated with new fields to support new features like TSO in case of
> + protocol offload.
>
> * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
> ``hdr_l3_len`` to configure tunnel L3 header length.
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 3ade7fe5ac..5480f05a99 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -196,6 +196,10 @@ ABI Changes
> ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> and hard expiry limits. Limits can be either in number of packets or bytes.
>
> +* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
> + in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
> + packet IPv4 header checksum and L4 checksum need to be offloaded to
> + security device.
>
> Known Issues
> ------------
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index bb01f0f195..d9271a6c45 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
> /**< Support operations on multiple data-units message */
> #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY (1ULL << 26)
> /**< Support wrapped key in cipher xform */
> +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM (1ULL << 27)
> +/**< Support inner checksum computation/verification */
>
> /**
> * Get the name of a crypto device feature flag
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index ab1a6e1f65..0c5636377e 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -230,6 +230,37 @@ struct rte_security_ipsec_sa_options {
> * * 0: Do not match UDP ports
> */
> uint32_t udp_ports_verify : 1;
> +
> + /** Compute/verify inner packet IPv4 header checksum in tunnel mode
> + *
> + * * 1: For outbound, compute inner packet IPv4 header checksum
> + * before tunnel encapsulation and for inbound, verify after
> + * tunnel decapsulation.
> + * * 0: Inner packet IP header checksum is not computed/verified.
> + *
> + * The checksum verification status would be set in mbuf using
> + * PKT_RX_IP_CKSUM_xxx flags.
> + *
> + * Inner IP checksum computation can also be enabled(per operation)
> + * by setting the flag PKT_TX_IP_CKSUM in mbuf.
> + */
> + uint32_t ip_csum_enable : 1;
> +
> + /** Compute/verify inner packet L4 checksum in tunnel mode
> + *
> + * * 1: For outbound, compute inner packet L4 checksum before
> + * tunnel encapsulation and for inbound, verify after
> + * tunnel decapsulation.
> + * * 0: Inner packet L4 checksum is not computed/verified.
> + *
> + * The checksum verification status would be set in mbuf using
> + * PKT_RX_L4_CKSUM_xxx flags.
> + *
> + * Inner L4 checksum computation can also be enabled(per operation)
> + * by setting the flags PKT_TX_TCP_CKSUM or PKT_TX_SCTP_CKSUM or
> + * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
> + */
> + uint32_t l4_csum_enable : 1;
> };
>
> /** IPSec security association direction */
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.22.0
^ permalink raw reply [relevance 0%]
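A usage sketch for the two new per-SA options (assumptions: lookaside protocol offload, tunnel mode; all other session-configuration fields are elided):

#include <rte_security.h>

/* Request inner IPv4 header and L4 checksum offload on this SA;
 * the rest of the IPsec/crypto configuration is omitted here. */
static void
sa_conf_enable_inner_csum(struct rte_security_session_conf *conf)
{
	conf->action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
	conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;

	/* new fields introduced by this patch */
	conf->ipsec.options.ip_csum_enable = 1;
	conf->ipsec.options.l4_csum_enable = 1;
}

As the comments in the patch note, the same offloads can alternatively be requested per packet through the PKT_TX_IP_CKSUM / PKT_TX_*_CKSUM mbuf flags.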
* Re: [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array
@ 2021-10-03 21:05 3% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 21:05 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, dev
Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
Ferruh, mdr, Jayatheerthan, Jay
>
> >At queue configure stage always allocate space for maximum possible
> >number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> >That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> >pointer to internal queue data without extra checking of current
> >number
> >of configured queues.
> >That would help in future to hide rte_eth_dev and related structures.
> >
> >Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >---
> > lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> > 1 file changed, 9 insertions(+), 27 deletions(-)
> >
> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >index daf5ca9242..424bc260fa 100644
> >--- a/lib/ethdev/rte_ethdev.c
> >+++ b/lib/ethdev/rte_ethdev.c
> >@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev
> >*dev, uint16_t nb_queues)
> >
> > if (dev->data->rx_queues == NULL && nb_queues != 0) { /*
> >first time configuration */
> > dev->data->rx_queues = rte_zmalloc("ethdev-
> >>rx_queues",
> >- sizeof(dev->data->rx_queues[0]) *
> >nb_queues,
> >+ sizeof(dev->data->rx_queues[0]) *
> >+ RTE_MAX_QUEUES_PER_PORT,
> > RTE_CACHE_LINE_SIZE);
> > if (dev->data->rx_queues == NULL) {
> > dev->data->nb_rx_queues = 0;
>
> We could get rid of this zmalloc by declaring rx_queues as array of
> pointers, it would make code much simpler.
> I believe the original code dates back to "Initial" release.
Yep we can, and yes it will simplify this piece of code.
The main reason I decided not to do this change now is that
it will change the layout of the rte_eth_dev_data structure.
In this series I tried to minimize (or avoid) changes to rte_eth_dev and rte_eth_dev_data
as much as possible, to avoid any unforeseen performance and functional impacts.
If we manage to make rte_eth_dev and rte_eth_dev_data private, we can consider
that change and other changes to the rte_eth_dev and rte_eth_dev_data layouts in the future
without worrying about ABI breakage.
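For illustration, the alternative being discussed would look roughly like this (sketch only, not part of this series):

/* Fixed-size queue pointer arrays inside rte_eth_dev_data would remove
 * the zmalloc/realloc handling, at the cost of changing the structure
 * layout and size. */
struct rte_eth_dev_data {
	/* ... existing fields ... */
	void *rx_queues[RTE_MAX_QUEUES_PER_PORT]; /* today: void **rx_queues */
	void *tx_queues[RTE_MAX_QUEUES_PER_PORT]; /* today: void **tx_queues */
	/* ... */
};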
>
>
> >@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > rxq = dev->data->rx_queues;
> >
> >- for (i = nb_queues; i < old_nb_queues; i++)
> >+ for (i = nb_queues; i < old_nb_queues; i++) {
> > (*dev->dev_ops->rx_queue_release)(rxq[i]);
> >- rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> >- RTE_CACHE_LINE_SIZE);
> >- if (rxq == NULL)
> >- return -(ENOMEM);
> >- if (nb_queues > old_nb_queues) {
> >- uint16_t new_qs = nb_queues -
> >old_nb_queues;
> >-
> >- memset(rxq + old_nb_queues, 0,
> >- sizeof(rxq[0]) * new_qs);
> >+ rxq[i] = NULL;
> > }
> >
> >- dev->data->rx_queues = rxq;
> >-
> > } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >>rx_queue_release, -ENOTSUP);
> >
> >@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > if (dev->data->tx_queues == NULL && nb_queues != 0) { /*
> >first time configuration */
> > dev->data->tx_queues = rte_zmalloc("ethdev-
> >>tx_queues",
> >- sizeof(dev->data-
> >>tx_queues[0]) * nb_queues,
> >-
> >RTE_CACHE_LINE_SIZE);
> >+ sizeof(dev->data->tx_queues[0]) *
> >+ RTE_MAX_QUEUES_PER_PORT,
> >+ RTE_CACHE_LINE_SIZE);
> > if (dev->data->tx_queues == NULL) {
> > dev->data->nb_tx_queues = 0;
> > return -(ENOMEM);
> >@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > txq = dev->data->tx_queues;
> >
> >- for (i = nb_queues; i < old_nb_queues; i++)
> >+ for (i = nb_queues; i < old_nb_queues; i++) {
> > (*dev->dev_ops->tx_queue_release)(txq[i]);
> >- txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
> >- RTE_CACHE_LINE_SIZE);
> >- if (txq == NULL)
> >- return -ENOMEM;
> >- if (nb_queues > old_nb_queues) {
> >- uint16_t new_qs = nb_queues -
> >old_nb_queues;
> >-
> >- memset(txq + old_nb_queues, 0,
> >- sizeof(txq[0]) * new_qs);
> >+ txq[i] = NULL;
> > }
> >
> >- dev->data->tx_queues = txq;
> >-
> > } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >>tx_queue_release, -ENOTSUP);
> >
> >--
> >2.26.3
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
2021-10-02 0:14 0% [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Pavan Nikhilesh Bhagavatula
@ 2021-10-03 20:58 0% ` Ananyev, Konstantin
2021-10-03 21:10 0% ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 20:58 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, dev
Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
Ferruh, mdr, Jayatheerthan, Jay
> >Copy public function pointers (rx_pkt_burst(), etc.) and related
> >pointers to internal data from rte_eth_dev structure into a
> >separate flat array. That array will remain in a public header.
> >The intention here is to make rte_eth_dev and related structures
> >internal.
> >That should allow future possible changes to core eth_dev structures
> >to be transparent to the user and help to avoid ABI/API breakages.
> >The plan is to keep minimal part of data from rte_eth_dev public,
> >so we still can use inline functions for 'fast' calls
> >(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >
> >Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >---
> > lib/ethdev/ethdev_private.c | 52
> >++++++++++++++++++++++++++++++++++++
> > lib/ethdev/ethdev_private.h | 7 +++++
> > lib/ethdev/rte_ethdev.c | 17 ++++++++++++
> > lib/ethdev/rte_ethdev_core.h | 45
> >+++++++++++++++++++++++++++++++
> > 4 files changed, 121 insertions(+)
> >
>
> <snip>
>
> >+void
> >+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> >+{
> >+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> >+ static const struct rte_eth_fp_ops dummy_ops = {
> >+ .rx_pkt_burst = dummy_eth_rx_burst,
> >+ .tx_pkt_burst = dummy_eth_tx_burst,
> >+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
> >+ .txq = {.data = dummy_data, .clbk = dummy_data,},
> >+ };
> >+
> >+ *fpo = dummy_ops;
> >+}
> >+
> >+void
> >+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >+ const struct rte_eth_dev *dev)
> >+{
> >+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
> >+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
> >+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> >+ fpo->rx_queue_count = dev->rx_queue_count;
> >+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
> >+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
> >+
> >+ fpo->rxq.data = dev->data->rx_queues;
> >+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> >+
> >+ fpo->txq.data = dev->data->tx_queues;
> >+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> >+}
> >diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> >index 3724429577..40333e7651 100644
> >--- a/lib/ethdev/ethdev_private.h
> >+++ b/lib/ethdev/ethdev_private.h
> >@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
> >*_start, rte_eth_cmp_t cmp,
> > /* Parse devargs value for representor parameter. */
> > int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> >+/* reset eth 'fast' API to dummy values */
> >+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> >+
> >+/* setup eth 'fast' API to ethdev values */
> >+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >+ const struct rte_eth_dev *dev);
> >+
> > #endif /* _ETH_PRIVATE_H_ */
> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >index 424bc260fa..9fbb1bc3db 100644
> >--- a/lib/ethdev/rte_ethdev.c
> >+++ b/lib/ethdev/rte_ethdev.c
> >@@ -44,6 +44,9 @@
> > static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> > struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> >+/* public 'fast' API */
> >+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> >+
> > /* spinlock for eth device callbacks */
> > static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> >@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> > (*dev->dev_ops->link_update)(dev, 0);
> > }
> >
> >+ /* expose selection of PMD rx/tx function */
> >+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> >+
>
> Secondary process will not set these properly, I believe, as it might not
> call start(); and if it does, the primary process ops will not be set.
That's a very good point, have to admit - I missed that part.
>
> One simple solution is to call ops_setup() around rte_eth_dev_attach_secondary()
> but if application doesn't invoke start() on Primary the ops will not be set for it.
I think rte_eth_dev_attach_secondary() wouldn't work, as majority of the PMDs setup
fast ops function pointers after it.
From reading the code rte_eth_dev_probing_finish() seems like a good choice -
as it is always the final point in device initialization for secondary process.
BTW, we also need something similar at de-init phase.
rte_eth_dev_release_port() seems like a good candidate for it.
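A rough sketch of that approach (untested; it assumes the eth_dev_fp_ops_setup()/eth_dev_fp_ops_reset() helpers introduced by this patch):

void
rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
{
	/* ... existing code ... */

	/* Last point of device init for a secondary process, so its
	 * per-process copy of rte_eth_fp_ops gets populated too. */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
}

int
rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
{
	/* point rx/tx functions back to dummy ones at de-init */
	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);

	/* ... existing code ... */
	return 0;
}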
>
> > rte_ethdev_trace_start(port_id);
> > return 0;
> > }
> >@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> > return 0;
> > }
> >
> >+ /* point rx/tx functions to dummy ones */
> >+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> >+
> > dev->data->dev_started = 0;
> > ret = (*dev->dev_ops->dev_stop)(dev);
> > rte_ethdev_trace_stop(port_id, ret);
> >2.26.3
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
@ 2021-10-03 18:05 3% ` Dmitry Kozlyuk
2021-10-04 10:37 0% ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-03 18:05 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Ray Kinsella
2021-09-03 18:10 (UTC+0530), Harman Kalra:
> [...]
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> new file mode 100644
> index 0000000000..2e4fed96f0
> --- /dev/null
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include <stdlib.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +
> +#include <rte_interrupts.h>
> +
> +
> +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> + bool from_hugepage)
Since the purpose of the series is to reduce future ABI breakages,
how about making the second parameter "flags" to have some spare bits?
(If not removing it completely per David's suggestion.)
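E.g. something along these lines (sketch; the flag name is purely illustrative):

#define RTE_INTR_INSTANCE_F_HUGEPAGE_MEM (1U << 0) /* was: bool from_hugepage */

struct rte_intr_handle *
rte_intr_handle_instance_alloc(int size, uint32_t flags);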
> +{
> + struct rte_intr_handle *intr_handle;
> + int i;
> +
> + if (from_hugepage)
> + intr_handle = rte_zmalloc(NULL,
> + size * sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> + if (!intr_handle) {
> + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + for (i = 0; i < size; i++) {
> + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> + intr_handle[i].alloc_from_hugepage = from_hugepage;
> + }
> +
> + return intr_handle;
> +}
> +
> +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> + struct rte_intr_handle *intr_handle, int index)
If rte_intr_handle_instance_alloc() returns a pointer to an array,
this function is useless since the user can simply manipulate a pointer.
If we want to make a distinction between a single struct rte_intr_handle and a
commonly allocated bunch of such (but why?), then they should be represented
by distinct types.
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOMEM;
Why is it sometimes ENOMEM and sometimes ENOTSUP when the handle is not
allocated?
> + return NULL;
> + }
> +
> + return &intr_handle[index];
> +}
> +
> +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src,
> + int index)
See above regarding the "index" parameter. If it can be removed, a better name
for this function would be rte_intr_handle_copy().
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (src == NULL) {
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + if (index < 0) {
> + RTE_LOG(ERR, EAL, "Index cany be negative");
> + rte_errno = EINVAL;
> + goto fail;
> + }
How about making this parameter "size_t"?
> +
> + intr_handle[index].fd = src->fd;
> + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle[index].type = src->type;
> + intr_handle[index].max_intr = src->max_intr;
> + intr_handle[index].nb_efd = src->nb_efd;
> + intr_handle[index].efd_counter_size = src->efd_counter_size;
> +
> + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> +
> + return 0;
> +fail:
> + return rte_errno;
Should be (-rte_errno) per documentation.
Please check all functions in this file that return an "int" status.
> [...]
> +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->vfio_dev_fd;
> +fail:
> + return rte_errno;
> +}
Returning an errno value instead of an FD is very error-prone.
Probably returning (-1) is both safe and convenient?
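I.e. a drop-in rewrite of the getter above along these lines (sketch only):

int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
	if (intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
		rte_errno = ENOTSUP;
		return -1; /* cannot collide with a valid fd */
	}

	return intr_handle->vfio_dev_fd;
}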
> +
> +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> + int max_intr)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (max_intr > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
Seems like this common/cnxk name leaked here by mistake?
> + max_intr, intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->max_intr = max_intr;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->max_intr;
> +fail:
> + return rte_errno;
> +}
Should be negative per documentation and to avoid returning a positive value
that cannot be distinguished from a successful return.
Please also check other functions in this file returning an "int" result
(not status).
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note
@ 2021-10-03 9:48 0% ` Gujjar, Abhinandan S
0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-10-03 9:48 UTC (permalink / raw)
To: Shijith Thotton, dev; +Cc: adwivedi, anoobj, gakhil, jerinj, pbhagavatula
Acked-by: Abhinandan Gujjar <Abhinandan.gujjar@intel.com>
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Monday, September 27, 2021 8:53 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; adwivedi@marvell.com;
> anoobj@marvell.com; gakhil@marvell.com; jerinj@marvell.com;
> pbhagavatula@marvell.com
> Subject: [PATCH v4] doc: remove event crypto metadata deprecation note
>
> Proposed change to event crypto metadata is not done as per deprecation
> note. Instead, comments are updated in spec to improve readability.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> v4:
> * Removed changes as per deprecation note.
> * Updated spec comments.
>
> v3:
> * Updated ABI section of release notes.
>
> v2:
> * Updated deprecation notice.
>
> v1:
> * Rebased.
>
> doc/guides/rel_notes/deprecation.rst | 6 ------
> lib/eventdev/rte_event_crypto_adapter.h | 1 +
> 2 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index bf1e07c0a8..fae3abd282 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -254,12 +254,6 @@ Deprecation Notices
> An 8-byte reserved field will be added to the structure ``rte_event_timer``
> to
> support future extensions.
>
> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space
> holder
> - for ``response_info``. Both should be decoupled for better clarity.
> - New space for ``response_info`` can be made by changing
> - ``rte_event_crypto_metadata`` type to structure from union.
> - This change is targeted for DPDK 21.11.
> -
> * metrics: The function ``rte_metrics_init`` will have a non-void return
> in order to notify errors instead of calling ``rte_exit``.
>
> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> b/lib/eventdev/rte_event_crypto_adapter.h
> index 27fb628eef..edbd5c61a3 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.h
> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> @@ -227,6 +227,7 @@ union rte_event_crypto_metadata {
> struct rte_event_crypto_request request_info;
> /**< Request information to be filled in by application
> * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> + * First 8 bytes of request_info is reserved for response_info.
> */
> struct rte_event response_info;
> /**< Response information to be filled in by application
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 04/13] eventdev: move inline APIs into separate structure
@ 2021-10-03 8:27 2% ` pbhagavatula
1 sibling, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-03 8:27 UTC (permalink / raw)
To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
lib/eventdev/eventdev_pmd.h | 25 +++++++
lib/eventdev/eventdev_pmd_pci.h | 5 +-
lib/eventdev/eventdev_private.c | 112 +++++++++++++++++++++++++++++++
lib/eventdev/meson.build | 1 +
lib/eventdev/rte_eventdev.c | 12 +++-
lib/eventdev/rte_eventdev_core.h | 28 ++++++++
lib/eventdev/version.map | 5 ++
7 files changed, 186 insertions(+), 2 deletions(-)
create mode 100644 lib/eventdev/eventdev_private.c
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7eb2aa0520..2f88dbd6d8 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1189,4 +1189,29 @@ __rte_internal
int
rte_event_pmd_release(struct rte_eventdev *eventdev);
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+ const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
#endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..563b579a77 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,11 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
/* Invoke PMD device initialization function */
retval = devinit(eventdev);
- if (retval == 0)
+ if (retval == 0) {
+ event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+ eventdev);
return 0;
+ }
RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+ __rte_unused const struct rte_event *ev)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+ __rte_unused const struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue same destination requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event crypto adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+ static const struct rte_event_fp_ops dummy = {
+ .enqueue = dummy_event_enqueue,
+ .enqueue_burst = dummy_event_enqueue_burst,
+ .enqueue_new_burst = dummy_event_enqueue_burst,
+ .enqueue_forward_burst = dummy_event_enqueue_burst,
+ .dequeue = dummy_event_dequeue,
+ .dequeue_burst = dummy_event_dequeue_burst,
+ .txa_enqueue = dummy_event_tx_adapter_enqueue,
+ .txa_enqueue_same_dest =
+ dummy_event_tx_adapter_enqueue_same_dest,
+ .ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .data = dummy_data,
+ };
+
+ *fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+ const struct rte_eventdev *dev)
+{
+ fp_op->enqueue = dev->enqueue;
+ fp_op->enqueue_burst = dev->enqueue_burst;
+ fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+ fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+ fp_op->dequeue = dev->dequeue;
+ fp_op->dequeue_burst = dev->dequeue_burst;
+ fp_op->txa_enqueue = dev->txa_enqueue;
+ fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+ fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
endif
sources = files(
+ 'eventdev_private.c',
'rte_eventdev.c',
'rte_event_ring.c',
'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..f14a887340 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
/* Event dev north bound API implementation */
uint8_t
@@ -300,8 +303,8 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf)
{
- struct rte_eventdev *dev;
struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
int diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
return diag;
}
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
/* Configure the device */
diag = (*dev->dev_ops->dev_configure)(dev);
if (diag != 0) {
RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
event_dev_queue_config(dev, 0);
event_dev_port_config(dev, 0);
}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
else
return diag;
+ event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
return 0;
}
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
dev->data->dev_started = 0;
(*dev->dev_ops->dev_stop)(dev);
rte_eventdev_trace_stop(dev_id);
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
}
int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
return -EBUSY;
}
+ event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
rte_eventdev_trace_close(dev_id);
return (*dev->dev_ops->dev_close)(dev);
}
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..4461073101 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,34 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+struct rte_event_fp_ops {
+ event_enqueue_t enqueue;
+ /**< PMD enqueue function. */
+ event_enqueue_burst_t enqueue_burst;
+ /**< PMD enqueue burst function. */
+ event_enqueue_burst_t enqueue_new_burst;
+ /**< PMD enqueue burst new function. */
+ event_enqueue_burst_t enqueue_forward_burst;
+ /**< PMD enqueue burst fwd function. */
+ event_dequeue_t dequeue;
+ /**< PMD dequeue function. */
+ event_dequeue_burst_t dequeue_burst;
+ /**< PMD dequeue burst function. */
+ event_tx_adapter_enqueue_t txa_enqueue;
+ /**< PMD Tx adapter enqueue function. */
+ event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+ /**< PMD Tx adapter enqueue same destination function. */
+ event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ uintptr_t reserved[2];
+
+ void **data;
+ /**< points to array of internal port data pointers */
+ uintptr_t reserved2[4];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
#define RTE_EVENTDEV_NAME_MAX_LEN (64)
/**< @internal Max length of name of event PMD */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..33ab447d4b 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
rte_event_timer_cancel_burst;
rte_eventdevs;
+ #added in 21.11
+ rte_event_fp_ops;
+
local: *;
};
@@ -141,6 +144,8 @@ EXPERIMENTAL {
INTERNAL {
global:
+ event_dev_fp_ops_reset;
+ event_dev_fp_ops_set;
rte_event_pmd_selftest_seqn_dynfield_offset;
rte_event_pmd_allocate;
rte_event_pmd_get_named_dev;
--
2.17.1
^ permalink raw reply [relevance 2%]
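For context, the fast-path call pattern this enables looks roughly as follows (sketch only, not the converted rte_event_enqueue_burst() itself, which also keeps its debug checks):

#include <rte_eventdev.h>

static inline uint16_t
enqueue_burst_sketch(uint8_t dev_id, uint8_t port_id,
		     const struct rte_event ev[], uint16_t nb_events)
{
	const struct rte_event_fp_ops *fp_ops = &rte_event_fp_ops[dev_id];

	/* fp_ops->data[] holds the per-port private pointers (dev->data->ports) */
	return fp_ops->enqueue_burst(fp_ops->data[port_id], ev, nb_events);
}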
* Re: [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
@ 2021-10-02 8:57 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-02 8:57 UTC (permalink / raw)
To: Xiao Wang; +Cc: dev, Olivier Matz, Stephen Hemminger, Xia, Chenbo
On Thu, Sep 16, 2021 at 1:38 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> On Wed, Sep 08, 2021 at 06:59:15PM +0800, Xiao Wang wrote:
> > rte_net_make_rarp_packet was introduced in version v18.02, there was no
> > change in this public API since then, and it's still being used by vhost
> > lib and virtio driver, so promote it as stable ABI.
> >
> > Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Applied, thanks.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
@ 2021-10-02 0:14 0% Pavan Nikhilesh Bhagavatula
2021-10-03 20:58 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-10-02 0:14 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, Anoob Joseph, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
rahul.lakkireddy, hemant.agrawal, sachin.saxena, haiyue.wang,
johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
ferruh.yigit, mdr, jay.jayatheerthan
>Copy public function pointers (rx_pkt_burst(), etc.) and related
>pointers to internal data from rte_eth_dev structure into a
>separate flat array. That array will remain in a public header.
>The intention here is to make rte_eth_dev and related structures
>internal.
>That should allow future possible changes to core eth_dev structures
>to be transparent to the user and help to avoid ABI/API breakages.
>The plan is to keep minimal part of data from rte_eth_dev public,
>so we still can use inline functions for 'fast' calls
>(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>
>Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>---
> lib/ethdev/ethdev_private.c | 52
>++++++++++++++++++++++++++++++++++++
> lib/ethdev/ethdev_private.h | 7 +++++
> lib/ethdev/rte_ethdev.c | 17 ++++++++++++
> lib/ethdev/rte_ethdev_core.h | 45
>+++++++++++++++++++++++++++++++
> 4 files changed, 121 insertions(+)
>
<snip>
>+void
>+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>+{
>+ static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>+ static const struct rte_eth_fp_ops dummy_ops = {
>+ .rx_pkt_burst = dummy_eth_rx_burst,
>+ .tx_pkt_burst = dummy_eth_tx_burst,
>+ .rxq = {.data = dummy_data, .clbk = dummy_data,},
>+ .txq = {.data = dummy_data, .clbk = dummy_data,},
>+ };
>+
>+ *fpo = dummy_ops;
>+}
>+
>+void
>+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>+ const struct rte_eth_dev *dev)
>+{
>+ fpo->rx_pkt_burst = dev->rx_pkt_burst;
>+ fpo->tx_pkt_burst = dev->tx_pkt_burst;
>+ fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>+ fpo->rx_queue_count = dev->rx_queue_count;
>+ fpo->rx_descriptor_status = dev->rx_descriptor_status;
>+ fpo->tx_descriptor_status = dev->tx_descriptor_status;
>+
>+ fpo->rxq.data = dev->data->rx_queues;
>+ fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>+
>+ fpo->txq.data = dev->data->tx_queues;
>+ fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>+}
>diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>index 3724429577..40333e7651 100644
>--- a/lib/ethdev/ethdev_private.h
>+++ b/lib/ethdev/ethdev_private.h
>@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
>*_start, rte_eth_cmp_t cmp,
> /* Parse devargs value for representor parameter. */
> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>
>+/* reset eth 'fast' API to dummy values */
>+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>+
>+/* setup eth 'fast' API to ethdev values */
>+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>+ const struct rte_eth_dev *dev);
>+
> #endif /* _ETH_PRIVATE_H_ */
>diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>index 424bc260fa..9fbb1bc3db 100644
>--- a/lib/ethdev/rte_ethdev.c
>+++ b/lib/ethdev/rte_ethdev.c
>@@ -44,6 +44,9 @@
> static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>
>+/* public 'fast' API */
>+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>+
> /* spinlock for eth device callbacks */
> static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
>@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> (*dev->dev_ops->link_update)(dev, 0);
> }
>
>+ /* expose selection of PMD rx/tx function */
>+ eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>+
Secondary process will not set these properly, I believe, as it might not
call start(); and if it does, the primary process ops will not be set.
One simple solution is to call ops_setup() around rte_eth_dev_attach_secondary()
but if application doesn't invoke start() on Primary the ops will not be set for it.
> rte_ethdev_trace_start(port_id);
> return 0;
> }
>@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> return 0;
> }
>
>+ /* point rx/tx functions to dummy ones */
>+ eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>+
> dev->data->dev_started = 0;
> ret = (*dev->dev_ops->dev_stop)(dev);
> rte_ethdev_trace_stop(port_id, ret);
>2.26.3
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 1/3] ethdev: update modify field flow action
@ 2021-10-01 19:52 9% ` Viacheslav Ovsiienko
2021-10-04 9:38 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-01 19:52 UTC (permalink / raw)
To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- immediate source can be presented either as an unsigned
64-bit integer or pointer to data pattern in memory.
There was no explicit pointer field defined in the union
- the byte ordering for 64-bit integer was not specified.
Many fields have lesser lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was not defined and documented
- 64-bit integer size is not enough to provide MAC and
IPv6 addresses
In order to cover the issues and exclude any ambiguities
the following is done:
- introduce the explicit pointer field
in rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with 16-byte array
- update the modify field flow action documentation
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 8 ++++++++
doc/guides/rel_notes/release_21_11.rst | 7 +++++++
lib/ethdev/rte_flow.h | 15 ++++++++++++---
3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..a54760a7b4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,14 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
+in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
+rte_flow_item_ipv6 conventions, and so on. The bitfield extracted from the
+memory being applied as the second operation parameter is defined by width and
+the destination field offset. If the field size is larger than 16 bytes, the
+pattern can be provided as a pointer only.
.. _table_rte_flow_action_modify_field:
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..7db6cccab0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -170,6 +170,10 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+ array is extended, data pointer field is explicitly added to union, the
+ action behavior is defined in a more strict fashion and documentation updated.
+
ABI Changes
-----------
@@ -206,6 +210,9 @@ ABI Changes
and hard expiry limits. Limits can be either in number of packets or bytes.
+* ethdev: ``rte_flow_action_modify_data`` structure updated.
+
+
Known Issues
------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..af4c693ead 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
};
/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
* Field description for MODIFY_FIELD action.
*/
struct rte_flow_action_modify_data {
@@ -3217,10 +3220,16 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in relevant rte_flow_item_xxx.
*/
- uint64_t value;
+ uint8_t value[16];
+ /*
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
--
2.18.1
^ permalink raw reply [relevance 9%]
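A usage sketch for the reworked immediate value (illustrative only; the helper name is made up): set the destination MAC address through MODIFY_FIELD, with the 6 immediate bytes laid out exactly as the dst field of rte_flow_item_eth.

#include <string.h>
#include <rte_flow.h>

static void
fill_set_dmac(struct rte_flow_action_modify_field *mf)
{
	static const uint8_t new_mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

	memset(mf, 0, sizeof(*mf));
	mf->operation = RTE_FLOW_MODIFY_SET;
	mf->dst.field = RTE_FLOW_FIELD_MAC_DST;
	mf->src.field = RTE_FLOW_FIELD_VALUE;
	mf->width = 48; /* whole MAC address, in bits */
	/* immediate data: same byte order and layout as rte_flow_item_eth.dst */
	memcpy(mf->src.value, new_mac, sizeof(new_mac));
}

The structure is then used as the conf of an RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action in the rule's action list; fields larger than 16 bytes would have to be passed through src.pvalue instead.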
* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
@ 2021-10-01 17:40 0% ` Ananyev, Konstantin
2021-10-04 8:46 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-01 17:40 UTC (permalink / raw)
To: Yigit, Ferruh, dev
Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay
> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> <...>
>
> > static inline int
> > rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> > {
> > - struct rte_eth_dev *dev;
> > + struct rte_eth_fp_ops *p;
> > + void *qd;
> > +
> > + if (port_id >= RTE_MAX_ETHPORTS ||
> > + queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid port_id=%u or queue_id=%u\n",
> > + port_id, queue_id);
> > + return -EINVAL;
> > + }
>
> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
The original rte_eth_rx_queue_count() always has similar checks enabled,
which is why I also kept them 'always on'.
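For reference, the debug-only variant suggested above would look roughly as
in the sketch below. It reuses the checks from the quoted hunk under the
RTE_ETHDEV_DEBUG_RX guard; the function name is made up, the queue-count
lookup itself is elided, and the macros/limits are assumed to come from
rte_ethdev.h as in the quoted code.

#include <errno.h>
#include <rte_ethdev.h>

static inline int
rx_queue_count_sketch(uint16_t port_id, uint16_t queue_id)
{
#ifdef RTE_ETHDEV_DEBUG_RX
    /* Same validation as in the patch, but compiled in only for
     * debug builds, like the other fast-path helpers. */
    if (port_id >= RTE_MAX_ETHPORTS ||
            queue_id >= RTE_MAX_QUEUES_PER_PORT) {
        RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u or queue_id=%u\n",
                port_id, queue_id);
        return -EINVAL;
    }
#endif
    /* ... queue count lookup via rte_eth_fp_ops[port_id] ... */
    return 0;
}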
>
> <...>
>
> > +++ b/lib/ethdev/version.map
> > @@ -247,11 +247,16 @@ EXPERIMENTAL {
> > rte_mtr_meter_policy_delete;
> > rte_mtr_meter_policy_update;
> > rte_mtr_meter_policy_validate;
> > +
> > + # added in 21.05
>
> s/21.05/21.11/
>
> > + __rte_eth_rx_epilog;
> > + __rte_eth_tx_prolog;
>
> These are directly called by the application and must be part of the ABI, but they
> are marked as 'internal' and have an '__rte' prefix to highlight it; this may be confusing.
> What about making them a proper, non-internal API?
Hmm, I am not sure what you suggest here.
We don't want users to call them explicitly.
They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
So I did what I thought is our usual policy for such semi-internal things:
have '@internal' in the comments, but in version.map put them under the
EXPERIMENTAL/global section.
What do you think it should be instead?
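For illustration, the layout described here would look roughly like the
following in version.map (the comment text is illustrative; the symbols are
the ones from the hunk quoted above and are documented with '@internal' in
the header comments):

EXPERIMENTAL {
        global:

        # added in 21.11
        # helpers for inline rte_eth_rx_burst()/rte_eth_tx_burst(),
        # documented as '@internal', not meant for direct use by apps
        __rte_eth_rx_epilog;
        __rte_eth_tx_prolog;
};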
> > };
> >
> > INTERNAL {
> > global:
> >
> > + rte_eth_fp_ops;
>
> This variable is accessed in an inline function, so it is accessed by the application;
> I am not sure it suits the 'internal' object definition, since internal should be only
> for objects accessed by other parts of DPDK.
> I think this can be added to 'DPDK_22'.
>
> > rte_eth_dev_allocate;
> > rte_eth_dev_allocated;
> > rte_eth_dev_attach_secondary;
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures
@ 2021-10-01 17:02 0% ` Ferruh Yigit
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-01 17:02 UTC (permalink / raw)
To: Konstantin Ananyev, dev
Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
chenbo.xia, thomas, mdr, jay.jayatheerthan
On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> v3 changes:
> - Changes in public struct naming (Jerin/Haiyue)
> - Split patches
> - Update docs
> - Shamelessly included Andrew's patch:
> https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
> into this series.
> I have to do a similar thing here, so I decided to avoid duplicated effort.
>
> The aim of this patch series is to make the rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
>
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy the public 'fast' function pointers (rx_pkt_burst(), etc.) and
> related data pointers from rte_eth_dev into a separate flat array.
> We keep it public to still be able to use inline functions for these
> 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> Note that apart from the function pointers themselves, each element of this
> flat array also contains two opaque pointers for each ethdev:
> 1) a pointer to an array of internal queue data pointers
> 2) a pointer to an array of queue callback data pointers.
> Note that exposing this extra information allows us to avoid extra
> changes at the PMD level and should help to avoid possible
> performance degradation (a simplified sketch follows below).
> 2. Change the implementation of the 'fast' inline ethdev functions
> (rte_eth_rx_burst(), etc.) to use the new public flat array.
> While it is an ABI breakage, this change is intended to be transparent
> for both users (no changes in user apps are required) and PMD developers
> (no changes in PMDs are required).
> One extra note: with the new implementation, RX/TX callback invocation
> will cost one extra function call. That might cause some slowdown for
> code paths with RX/TX callbacks heavily involved.
> Hope such a trade-off is acceptable to the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> things into the internal header: <ethdev_driver.h>.
>
> That approach was selected to:
> - Avoid(/minimize) possible performance losses.
> - Minimize required changes inside PMDs.
>
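To make point 1 above more concrete, a simplified sketch of such a flat-array
entry is shown below. All names and the burst function signature here are
illustrative only, not the exact ones ethdev ends up using.

#include <stdint.h>
#include <rte_config.h>          /* RTE_MAX_ETHPORTS */

struct rte_mbuf;

/* Sketch of a fast-path burst function type: it takes the opaque
 * per-queue data pointer directly instead of (port_id, queue_id). */
typedef uint16_t (*burst_fn_t)(void *qdata, struct rte_mbuf **pkts,
                               uint16_t nb_pkts);

/* Illustrative flat-array entry: the copied 'fast' function pointers
 * plus, per ethdev, a pointer to the array of internal queue data
 * pointers and a pointer to the array of queue callback data pointers. */
struct fp_ops_sketch {
    burst_fn_t rx_pkt_burst;
    burst_fn_t tx_pkt_burst;
    void **rxq_data;        /* internal Rx queue data pointers */
    void **txq_data;        /* internal Tx queue data pointers */
    void **rxq_clbk;        /* Rx queue callback data pointers */
    void **txq_clbk;        /* Tx queue callback data pointers */
};

/* One public entry per port, indexed by port_id, so that
 * rte_eth_rx_burst()/rte_eth_tx_burst() can stay inline. */
extern struct fp_ops_sketch eth_fp_ops_sketch[RTE_MAX_ETHPORTS];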
Overall +1 to the approach.
Only the 'metrics' library is failing to build; it also needs to include the driver
header:
diff --git a/lib/metrics/rte_metrics_telemetry.c
b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef6135c..5be21b2e86c5 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
#include <rte_string_fns.h>
#ifdef RTE_LIB_TELEMETRY
#include <telemetry_internal.h>
> Performance testing results (ICX 2.0GHz, E810 (ice)):
> - testpmd macswap fwd mode, plus
> a) no RX/TX callbacks:
> no actual slowdown observed
> b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> ~2% slowdown
> - l3fwd: no actual slowdown observed
>
> I would like to thank Ferruh and Jerin for reviewing and testing previous
> versions of this series. All other interested parties, please don't be shy
> and provide your feedback.
>
> Konstantin Ananyev (7):
> ethdev: allocate max space for internal queue array
> ethdev: change input parameters for rx_queue_count
> ethdev: copy ethdev 'fast' API into separate structure
> ethdev: make burst functions to use new flat array
> ethdev: add API to retrieve multiple ethernet addresses
> ethdev: remove legacy Rx descriptor done API
> ethdev: hide eth dev related structures
>
<...>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 10/10] doc: update release note
@ 2021-10-01 16:59 4% ` Fan Zhang
0 siblings, 0 replies; 200+ results
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
To: dev; +Cc: gakhil, Fan Zhang
This patch updates the release notes to describe the QAT refactor
changes made.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3ade7fe5ac..02a61be76b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -157,6 +157,10 @@ API Changes
the crypto/security operation. This field will be used to communicate
events such as soft expiry with IPsec in lookaside mode.
+* common/qat: QAT PMD is refactored to divide generation-specific control
+ path code into dedicated files. This change also applies to QAT compression,
+ QAT symmetric crypto, and QAT asymmetric crypto.
+
ABI Changes
-----------
--
2.25.1
^ permalink raw reply [relevance 4%]
-- links below jump to the message on this page --
2020-05-12 20:40 [dpdk-dev] [RFC v2] hash: unify crc32 API header for x86 and ARM pbhagavatula
2021-10-03 23:00 1% ` [dpdk-dev] [PATCH v3 1/2] hash: split x86 and SW hash CRC intrinsics pbhagavatula
2021-10-04 5:52 1% ` [dpdk-dev] [PATCH v4 " pbhagavatula
2021-03-18 6:34 [dpdk-dev] [PATCH 1/6] baseband: introduce NXP LA12xx driver Hemant Agrawal
2021-10-07 9:33 ` [dpdk-dev] [PATCH v9 0/8] baseband: add " nipun.gupta
2021-10-07 9:33 4% ` [dpdk-dev] [PATCH v9 1/8] bbdev: add device info related to data endianness assumption nipun.gupta
2021-10-11 4:32 ` [dpdk-dev] [PATCH v10 0/8] baseband: add NXP LA12xx driver nipun.gupta
2021-10-11 4:32 4% ` [dpdk-dev] [PATCH v10 1/8] bbdev: add device info related to data endianness nipun.gupta
2021-05-27 15:28 [dpdk-dev] [PATCH] net: introduce IPv4 ihl and version fields Gregory Etelson
2021-10-04 12:13 4% ` [dpdk-dev] [PATCH v4] " Gregory Etelson
2021-10-12 12:29 4% ` [dpdk-dev] [PATCH v5] " Gregory Etelson
2021-07-02 13:18 [dpdk-dev] [PATCH] dmadev: introduce DMA device library Chengwen Feng
2021-10-09 9:33 3% ` [dpdk-dev] [PATCH v24 0/6] support dmadev Chengwen Feng
2021-07-12 16:17 [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name Andrew Rybchenko
2021-09-13 11:26 ` [dpdk-dev] [PATCH v5] " Andrew Rybchenko
2021-10-01 11:39 ` Andrew Rybchenko
2021-10-08 8:39 0% ` Xueming(Steven) Li
2021-10-08 9:27 4% ` [dpdk-dev] [PATCH v6] " Andrew Rybchenko
2021-10-11 12:30 4% ` [dpdk-dev] [PATCH v7] " Andrew Rybchenko
2021-10-11 12:53 4% ` [dpdk-dev] [PATCH v8] " Andrew Rybchenko
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 " Radu Nicolau
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-12 10:23 0% ` Ananyev, Konstantin
2021-10-11 11:29 5% ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
2021-10-12 10:24 0% ` Ananyev, Konstantin
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
2021-10-13 12:13 5% ` [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-13 12:13 5% ` [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
2021-07-31 18:13 [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
2021-10-08 20:45 3% ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
2021-10-11 8:31 0% ` Thomas Monjalon
2021-10-11 16:58 0% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-11 22:15 3% ` Stephen Hemminger
2021-10-12 8:31 0% ` Kinsella, Ray
2021-10-12 6:59 0% ` Thomas Monjalon
2021-10-12 8:53 0% ` Kinsella, Ray
2021-10-12 8:50 0% ` [dpdk-dev] " Kinsella, Ray
2021-10-11 10:46 0% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Zhang, Roy Fan
2021-10-12 9:55 3% ` Kinsella, Ray
2021-10-12 10:19 4% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-12 10:50 0% ` Anoob Joseph
2021-10-12 11:28 0% ` Kinsella, Ray
2021-10-12 11:34 0% ` Anoob Joseph
2021-10-12 11:52 0% ` Thomas Monjalon
2021-10-12 13:38 0% ` Anoob Joseph
2021-10-12 13:54 0% ` Thomas Monjalon
2021-10-12 14:18 0% ` Anoob Joseph
2021-10-12 14:47 0% ` Kinsella, Ray
2021-10-12 15:06 0% ` Thomas Monjalon
2021-10-13 5:36 0% ` Anoob Joseph
2021-10-13 7:02 3% ` Thomas Monjalon
2021-10-13 7:04 0% ` Anoob Joseph
2021-10-13 8:39 3% ` Kinsella, Ray
2021-08-13 16:51 [dpdk-dev] [PATCH v1 0/6] bbdev update related to CRC usage Nicolas Chautru
2021-08-13 16:51 ` [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
2021-10-11 20:17 3% ` Thomas Monjalon
2021-10-11 20:38 0% ` Chautru, Nicolas
2021-10-12 6:53 3% ` Thomas Monjalon
2021-10-12 16:36 4% ` Chautru, Nicolas
2021-10-12 16:59 0% ` Thomas Monjalon
2021-08-19 21:31 [dpdk-dev] [PATCH v14 0/9] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-10-08 22:40 3% ` [dpdk-dev] [PATCH v15 " Narcisa Ana Maria Vasile
2021-10-09 7:41 3% ` [dpdk-dev] [PATCH v16 " Narcisa Ana Maria Vasile
2021-08-23 19:40 [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal pbhagavatula
2021-10-03 8:26 ` [dpdk-dev] [PATCH v2 01/13] " pbhagavatula
2021-10-03 8:27 2% ` [dpdk-dev] [PATCH v2 04/13] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-06 6:49 ` [dpdk-dev] [PATCH v3 01/14] eventdev: make driver interface as internal pbhagavatula
2021-10-06 6:50 2% ` [dpdk-dev] [PATCH v3 04/14] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-06 6:50 ` [dpdk-dev] [PATCH v3 14/14] eventdev: mark trace variables as internal pbhagavatula
2021-10-06 7:11 5% ` David Marchand
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 " Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-03 18:05 3% ` Dmitry Kozlyuk
2021-10-04 10:37 0% ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-05 12:14 4% ` [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal Harman Kalra
2021-10-05 12:14 1% ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-05 16:07 0% ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 " Andrew Rybchenko
2021-10-11 14:48 2% ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-29 12:51 [dpdk-dev] [PATCH 0/8] cryptodev: hide internal strutures Akhil Goyal
2021-10-11 12:43 ` [dpdk-dev] [PATCH v2 0/5] cryptodev: hide internal structures Akhil Goyal
2021-10-11 12:43 2% ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-11 14:45 0% ` Zhang, Roy Fan
2021-10-11 12:43 3% ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-11 14:54 0% ` Zhang, Roy Fan
2021-08-31 7:56 [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures Shijith Thotton
2021-09-27 15:22 ` [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note Shijith Thotton
2021-10-03 9:48 0% ` Gujjar, Abhinandan S
2021-09-01 5:30 [dpdk-dev] [PATCH 0/2] *** support IOMMU for DMA device *** Xuan Ding
2021-10-11 7:59 3% ` [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device Xuan Ding
2021-09-01 12:20 [dpdk-dev] [PATCH] pipeline: remove experimental tag from API Jasvinder Singh
2021-09-27 10:17 ` Thomas Monjalon
2021-10-12 20:34 ` Dumitrescu, Cristian
2021-10-12 21:52 ` Thomas Monjalon
2021-10-13 8:51 3% ` Kinsella, Ray
2021-10-13 9:40 0% ` Thomas Monjalon
2021-10-13 9:43 4% ` Kinsella, Ray
2021-10-13 9:49 0% ` Thomas Monjalon
2021-10-13 10:02 4% ` Kinsella, Ray
2021-10-13 11:11 0% ` Bruce Richardson
2021-10-13 11:42 0% ` Kinsella, Ray
2021-10-13 11:58 0% ` Thomas Monjalon
2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
2021-10-01 16:59 4% ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
2021-09-03 0:47 [dpdk-dev] [PATCH 0/5] Packet capture framework enhancements Stephen Hemminger
2021-10-12 10:21 ` [dpdk-dev] [PATCH v12 00/12] Packet capture framework update Pattan, Reshma
2021-10-12 15:44 ` Stephen Hemminger
2021-10-12 15:48 ` Thomas Monjalon
2021-10-12 18:00 3% ` Stephen Hemminger
2021-10-12 18:22 0% ` Thomas Monjalon
2021-10-13 8:44 0% ` Pattan, Reshma
2021-09-08 10:59 [dpdk-dev] [PATCH] net: promote make rarp packet API as stable Xiao Wang
2021-09-16 11:38 ` Olivier Matz
2021-10-02 8:57 0% ` David Marchand
2021-09-10 2:23 [dpdk-dev] [PATCH 0/8] Removal of PCI bus ABIs Chenbo Xia
2021-09-18 2:24 ` [dpdk-dev] [PATCH v2 0/7] " Chenbo Xia
2021-09-29 7:38 ` Xia, Chenbo
2021-09-30 8:45 ` David Marchand
2021-10-04 13:37 ` David Marchand
2021-10-04 15:56 ` Harris, James R
2021-10-06 4:25 ` Xia, Chenbo
2021-10-08 6:15 4% ` Liu, Changpeng
2021-10-08 7:08 0% ` David Marchand
2021-10-08 7:44 0% ` Liu, Changpeng
2021-10-11 6:58 0% ` Xia, Chenbo
2021-10-12 7:04 ` Thomas Monjalon
2021-10-12 16:59 ` Walker, Benjamin
2021-10-12 18:43 ` Thomas Monjalon
2021-10-12 19:26 3% ` Walker, Benjamin
2021-10-12 21:50 3% ` Thomas Monjalon
2021-09-10 14:16 [dpdk-dev] [RFC 1/3] ethdev: update modify field flow action Viacheslav Ovsiienko
2021-10-01 19:52 ` [dpdk-dev] [PATCH 0/3] " Viacheslav Ovsiienko
2021-10-01 19:52 9% ` [dpdk-dev] [PATCH 1/3] " Viacheslav Ovsiienko
2021-10-04 9:38 0% ` Ori Kam
2021-10-10 23:45 ` [dpdk-dev] [PATCH v2 0/5] " Viacheslav Ovsiienko
2021-10-10 23:45 8% ` [dpdk-dev] [PATCH v2 1/5] " Viacheslav Ovsiienko
2021-10-11 9:54 3% ` Andrew Rybchenko
2021-10-12 8:06 ` [dpdk-dev] [PATCH v3 0/5] " Viacheslav Ovsiienko
2021-10-12 8:06 3% ` [dpdk-dev] [PATCH v3 1/5] " Viacheslav Ovsiienko
2021-10-12 20:25 ` [dpdk-dev] [PATCH v5 0/5] " Viacheslav Ovsiienko
2021-10-12 20:25 3% ` [dpdk-dev] [PATCH v5 1/5] " Viacheslav Ovsiienko
2021-09-17 2:15 [dpdk-dev] [PATCH v2 2/2] app/test: Delete cmdline free function zhihongx.peng
2021-10-08 6:41 4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-11 5:20 0% ` Peng, ZhihongX
2021-10-11 8:25 0% ` Dmitry Kozlyuk
2021-10-13 1:53 0% ` Peng, ZhihongX
2021-10-13 2:36 4% ` Dmitry Kozlyuk
2021-10-13 3:12 0% ` Peng, ZhihongX
2021-10-13 1:52 4% ` [dpdk-dev] [PATCH v4 " zhihongx.peng
2021-09-22 14:09 [dpdk-dev] [RFC v2 0/5] hide eth dev related structures Konstantin Ananyev
2021-10-01 14:02 ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
2021-10-01 14:02 ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-10-01 16:46 ` Ferruh Yigit
2021-10-01 17:40 0% ` Ananyev, Konstantin
2021-10-04 8:46 3% ` Ferruh Yigit
2021-10-04 9:20 0% ` Ananyev, Konstantin
2021-10-04 10:13 3% ` Ferruh Yigit
2021-10-04 11:17 0% ` Ananyev, Konstantin
2021-10-01 17:02 0% ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Ferruh Yigit
2021-10-04 13:55 4% ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2021-10-04 13:55 6% ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-04 13:55 2% ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
2021-10-05 13:09 0% ` Thomas Monjalon
2021-10-05 16:41 0% ` Ananyev, Konstantin
2021-10-04 13:56 2% ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-10-05 9:54 0% ` David Marchand
2021-10-05 10:13 0% ` Ananyev, Konstantin
2021-10-04 13:56 9% ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-05 10:04 0% ` David Marchand
2021-10-05 10:43 0% ` Ferruh Yigit
2021-10-06 16:42 0% ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
2021-10-06 17:26 0% ` Ali Alnubani
2021-10-07 11:27 4% ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
2021-10-07 11:27 ` [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
2021-10-11 9:20 ` Andrew Rybchenko
2021-10-11 16:25 3% ` Ananyev, Konstantin
2021-10-11 17:15 0% ` Andrew Rybchenko
2021-10-11 23:06 0% ` Ananyev, Konstantin
2021-10-12 5:47 0% ` Andrew Rybchenko
2021-10-07 11:27 6% ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-11 8:06 0% ` Andrew Rybchenko
2021-10-12 17:59 0% ` Hyong Youb Kim (hyonkim)
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
2021-10-09 12:05 0% ` fengchengwen
2021-10-11 1:18 0% ` fengchengwen
2021-10-11 8:35 0% ` Andrew Rybchenko
2021-10-11 15:15 0% ` Ananyev, Konstantin
2021-10-11 8:25 0% ` Andrew Rybchenko
2021-10-11 16:52 0% ` Ananyev, Konstantin
2021-10-11 17:22 0% ` Andrew Rybchenko
2021-10-07 11:27 2% ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
2021-10-11 9:02 0% ` Andrew Rybchenko
2021-10-11 15:47 0% ` Ananyev, Konstantin
2021-10-11 17:03 0% ` Andrew Rybchenko
2021-10-07 11:27 9% ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-08 18:13 0% ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
2021-10-11 9:22 0% ` Andrew Rybchenko
2021-09-22 18:04 [dpdk-dev] [PATCH 0/3] ethdev: introduce configurable flexible item Viacheslav Ovsiienko
2021-10-12 10:49 ` [dpdk-dev] [PATCH v4 0/5] ethdev: update modify field flow action Viacheslav Ovsiienko
2021-10-12 10:49 3% ` [dpdk-dev] [PATCH v4 1/5] " Viacheslav Ovsiienko
2021-09-23 9:45 [dpdk-dev] [RFC PATCH v8 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
2021-10-11 7:55 3% ` [dpdk-dev] [PATCH v9 " Liguzinski, WojciechX
2021-09-28 19:47 [dpdk-dev] [PATCH v2 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2021-10-05 0:55 4% ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05 0:55 3% ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05 20:15 4% ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05 20:15 3% ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-07 22:10 4% ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-07 22:10 3% ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-09-29 14:52 [dpdk-dev] [PATCH 0/4] net/mlx5: implicit mempool registration dkozlyuk
2021-10-12 0:04 ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2021-10-12 0:04 4% ` [dpdk-dev] [PATCH v3 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-12 3:37 0% ` Jerin Jacob
2021-10-12 6:42 0% ` Andrew Rybchenko
2021-10-13 11:01 ` [dpdk-dev] [PATCH v4 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-13 11:01 4% ` [dpdk-dev] [PATCH v4 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-09-30 12:58 [dpdk-dev] [PATCH v4 0/3] add SA config option for inner pkt csum Archana Muniganti
2021-09-30 12:58 ` [dpdk-dev] [PATCH v4 1/3] security: " Archana Muniganti
2021-10-03 21:09 0% ` Ananyev, Konstantin
2021-09-30 14:50 [dpdk-dev] [PATCH 0/3] crypto/security session framework rework Akhil Goyal
2021-09-30 14:50 ` [dpdk-dev] [PATCH 3/3] cryptodev: rework session framework Akhil Goyal
2021-10-01 15:53 ` Zhang, Roy Fan
2021-10-04 19:07 0% ` Akhil Goyal
2021-10-01 13:47 [dpdk-dev] [PATCH v1 00/12] ethdev: rework transfer flow API Andrew Rybchenko
2021-10-13 16:42 ` [dpdk-dev] [PATCH v4 " Ivan Malov
2021-10-13 16:42 4% ` [dpdk-dev] [PATCH v4 01/12] ethdev: add port representor item to " Ivan Malov
2021-10-13 16:42 4% ` [dpdk-dev] [PATCH v4 02/12] ethdev: add represented port " Ivan Malov
2021-10-13 16:42 4% ` [dpdk-dev] [PATCH v4 03/12] ethdev: add port representor action " Ivan Malov
2021-10-13 16:42 4% ` [dpdk-dev] [PATCH v4 04/12] ethdev: add represented port " Ivan Malov
2021-10-13 16:42 3% ` [dpdk-dev] [PATCH v4 05/12] ethdev: deprecate hard-to-use or ambiguous items and actions Ivan Malov
2021-10-01 16:33 [dpdk-dev] [PATCH v2 0/2] net: Windows compatibility renaming Dmitry Kozlyuk
2021-10-07 22:07 ` [dpdk-dev] [PATCH v3 " Dmitry Kozlyuk
2021-10-07 22:07 1% ` [dpdk-dev] [PATCH v3 1/2] net: rename Ethernet header fields Dmitry Kozlyuk
2021-10-02 0:02 [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Pavan Nikhilesh Bhagavatula
2021-10-03 21:05 3% ` Ananyev, Konstantin
2021-10-02 0:14 0% [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Pavan Nikhilesh Bhagavatula
2021-10-03 20:58 0% ` Ananyev, Konstantin
2021-10-03 21:10 0% ` Pavan Nikhilesh Bhagavatula
2021-10-04 6:36 12% [dpdk-dev] [PATCH] cryptodev: extend data-unit length field Matan Azrad
2021-10-04 12:55 4% [dpdk-dev] [PATCH v1] ci: update machine meson option to platform Juraj Linkeš
2021-10-04 13:29 4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
2021-10-11 13:40 4% ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
2021-10-05 9:16 4% [dpdk-dev] [PATCH] sort symbols map David Marchand
2021-10-05 14:16 0% ` Kinsella, Ray
2021-10-05 14:31 0% ` David Marchand
2021-10-05 15:06 0% ` David Marchand
2021-10-11 11:36 0% ` Dumitrescu, Cristian
2021-10-06 20:58 [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption Nicolas Chautru
2021-10-06 20:58 4% ` Nicolas Chautru
2021-10-07 12:01 0% ` Tom Rix
2021-10-07 15:19 0% ` Chautru, Nicolas
2021-10-07 13:13 0% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-07 15:41 0% ` Chautru, Nicolas
2021-10-07 16:49 0% ` Nipun Gupta
2021-10-07 18:58 0% ` Chautru, Nicolas
2021-10-08 4:34 0% ` Nipun Gupta
2021-10-07 12:57 [dpdk-dev] [PATCH v1] eventdev/rx-adapter: add telemetry callbacks Ganapati Kundapura
2021-10-11 16:14 3% ` Jerin Jacob
2021-10-12 8:35 3% ` Kundapura, Ganapati
2021-10-12 8:47 4% ` Jerin Jacob
2021-10-12 9:10 3% ` Thomas Monjalon
2021-10-12 9:26 0% ` Jerin Jacob
2021-10-12 10:05 3% ` Kinsella, Ray
2021-10-12 10:29 0% ` Kundapura, Ganapati
2021-10-13 8:03 [dpdk-dev] [PATCH v1 0/1] ci: enable DPDK GHA for arm64 with self-hosted runners Serena He
2021-10-13 8:03 7% ` [dpdk-dev] [PATCH v1 1/1] " Serena He
2021-10-13 11:32 0% ` Michael Santana
2021-10-13 8:57 11% [dpdk-dev] [PATCH] mempool: fix name size in mempool structure Andrew Rybchenko
2021-10-13 11:07 4% ` David Marchand
2021-10-13 11:14 0% ` Andrew Rybchenko
[not found] <0211007112750.25526-1-konstantin.ananyev@intel.com>
2021-10-13 13:36 4% ` [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures Konstantin Ananyev
2021-10-13 13:37 6% ` [dpdk-dev] [PATCH v6 2/6] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-13 13:37 2% ` [dpdk-dev] [PATCH v6 3/6] ethdev: copy fast-path API into separate structure Konstantin Ananyev
2021-10-13 14:25 0% ` Andrew Rybchenko
2021-10-13 13:37 2% ` [dpdk-dev] [PATCH v6 4/6] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
2021-10-13 13:37 8% ` [dpdk-dev] [PATCH v6 6/6] ethdev: hide eth dev related structures Konstantin Ananyev