* [PATCH v1 0/8] add IPv6 extension push remove @ 2023-04-17 9:25 Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Rongwei Liu ` (7 more replies) 0 siblings, 8 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas Add new rte_flow actions to push or remove a specific IPv6 extension header from network packets. Rongwei Liu (8): ethdev: add IPv6 extension push remove action app/testpmd: add IPv6 extension push remove cli net/mlx5/hws: add no reparse support net/mlx5: sample the srv6 last segment net/mlx5: generate srv6 modify header resource net/mlx5/hws: add IPv6 routing extension push pop actions net/mlx5/hws: add setter for IPv6 routing push pop net/mlx5: implement IPv6 routing push pop app/test-pmd/cmdline_flow.c | 443 +++++++++++++++++++++- doc/guides/nics/mlx5.rst | 9 +- doc/guides/prog_guide/rte_flow.rst | 21 ++ drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 41 ++ drivers/net/mlx5/hws/mlx5dr_action.c | 524 +++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 5 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 5 +- drivers/net/mlx5/hws/mlx5dr_cmd.h | 1 + drivers/net/mlx5/hws/mlx5dr_debug.c | 10 +- drivers/net/mlx5/hws/mlx5dr_matcher.c | 80 ++-- drivers/net/mlx5/hws/mlx5dr_matcher.h | 12 +- drivers/net/mlx5/hws/mlx5dr_rule.c | 65 +++- drivers/net/mlx5/mlx5.c | 42 ++- drivers/net/mlx5/mlx5.h | 31 ++ drivers/net/mlx5/mlx5_flow.h | 40 +- drivers/net/mlx5/mlx5_flow_dv.c | 386 +++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 268 ++++++++++++- lib/ethdev/rte_flow.c | 2 + lib/ethdev/rte_flow.h | 52 +++ 20 files changed, 1958 insertions(+), 80 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 1/8] ethdev: add IPv6 extension push remove action 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-05-24 6:55 ` Ori Kam 2023-04-17 9:25 ` [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli Rongwei Liu ` (6 subsequent siblings) 7 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas; +Cc: Ferruh Yigit, Andrew Rybchenko Add new rte_actions to push and remove the specific type of IPv6 extension to and from original packets. A new extension to be pushed should be the last extension due to the next header awareness. Remove can support the IPv6 extension in any position. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- doc/guides/prog_guide/rte_flow.rst | 21 ++++++++++++ lib/ethdev/rte_flow.c | 2 ++ lib/ethdev/rte_flow.h | 52 ++++++++++++++++++++++++++++++ 3 files changed, 75 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..2fe42e1cea 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3300,6 +3300,27 @@ The ``quota`` value is reduced according to ``mode`` setting. | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 | +------------------+----------------------------------------------------+ +Action: ``IPV6_EXT_PUSH`` +^^^^^^^^^^^^^^^^^^^^^^^^^ + +Add an IPv6 extension into IPv6 header and its template is provided in +its data buffer with the specific type as defined in the +``rte_flow_action_ipv6_ext_push`` definition. + +This action modifies the payload of matched flows. The data supplied must +be a valid extension in the specified type, it should be added the last one +if preceding extension existed. When applied to the original packet the +resulting packet must be a valid packet. + +Action: ``IPV6_EXT_REMOVE`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Remove an IPv6 extension whose type is provided in its type as defined in +the ``rte_flow_action_ipv6_ext_remove``. + +This action modifies the payload of matched flow and the packet should be +valid after removing. + Negative types ~~~~~~~~~~~~~~ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..af4b3f6da4 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -259,6 +259,8 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), MK_FLOW_ACTION(SEND_TO_KERNEL, 0), MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), + MK_FLOW_ACTION(IPV6_EXT_PUSH, sizeof(struct rte_flow_action_ipv6_ext_push)), + MK_FLOW_ACTION(IPV6_EXT_REMOVE, sizeof(struct rte_flow_action_ipv6_ext_remove)), }; int diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..369ecbc6ba 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2912,6 +2912,25 @@ enum rte_flow_action_type { * applied to the given ethdev Rx queue. */ RTE_FLOW_ACTION_TYPE_SKIP_CMAN, + + /** + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH + * + * Push IPv6 extension into IPv6 packet. + * + * @see struct rte_flow_action_ipv6_ext_push. + */ + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + + /** + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE + * + * Remove IPv6 extension from IPv6 packet whose type + * is provided in its configuration buffer. + * + * @see struct rte_flow_action_ipv6_ext_remove. 
+ */ + RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE, }; /** @@ -3352,6 +3371,39 @@ struct rte_flow_action_vxlan_encap { struct rte_flow_item *definition; }; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH + * + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH include: + * + * - IPV6_EXT TYPE / IPV6_EXT_HEADER_IN_TYPE / END + * + * size holds the number of bytes in @p data. + * The data must be added as the last IPv6 extension. + */ +struct rte_flow_action_ipv6_ext_push { + uint8_t *data; /**< IPv6 extension header data. */ + size_t size; /**< Size of @p data. */ + uint8_t type; /**< Type of IPv6 extension. */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE + * + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE include: + * + * - IPV6_EXT TYPE / END + */ +struct rte_flow_action_ipv6_ext_remove { + uint8_t type; /**< Type of IPv6 extension. */ +}; + /** * @warning * @b EXPERIMENTAL: this structure may change without prior notice -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
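To make the new actions concrete before the review discussion, here is a minimal sketch of how an application could fill the two structures defined above to push a Segment Routing header (SRH, routing header type 4, carried in IPv6 extension type 43) and to remove it again. The rte_flow names are taken from the patch itself; the template layout assumes the struct rte_ipv6_routing_ext definition from rte_ip.h that is used later in this series, and the buffer construction is illustrative rather than the only valid encoding.

#include <netinet/in.h>   /* IPPROTO_ROUTING (43), IPPROTO_UDP (17) */
#include <rte_ip.h>       /* struct rte_ipv6_routing_ext (8-byte base header) */
#include <rte_flow.h>

/* Template buffer: 8-byte SRH base header plus one 16-byte segment. */
static uint8_t srh_buf[sizeof(struct rte_ipv6_routing_ext) + 16];

static void
fill_ipv6_ext_actions(struct rte_flow_action_ipv6_ext_push *push,
		      struct rte_flow_action_ipv6_ext_remove *rem)
{
	struct rte_ipv6_routing_ext *srh = (void *)srh_buf;

	srh->next_hdr = IPPROTO_UDP;  /* header following the pushed SRH */
	srh->hdr_len = 2;             /* 8-byte units beyond the first 8: (24 - 8) / 8 */
	srh->type = 4;                /* Segment Routing header */
	srh->segments_left = 1;
	srh->last_entry = 0;          /* segments_left - 1 when the SRH carries no TLVs */
	/* The 16-byte segment address goes right after the base header. */

	push->data = srh_buf;
	push->size = sizeof(srh_buf);
	push->type = IPPROTO_ROUTING; /* the extension is appended as the last one */

	rem->type = IPPROTO_ROUTING;  /* remove works on the extension in any position */
}

These configurations would then back actions of type RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH and RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE in a flow rule's action list.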
* RE: [PATCH v1 1/8] ethdev: add IPv6 extension push remove action 2023-04-17 9:25 ` [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Rongwei Liu @ 2023-05-24 6:55 ` Ori Kam 2023-05-24 7:39 ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu 0 siblings, 1 reply; 64+ messages in thread From: Ori Kam @ 2023-05-24 6:55 UTC (permalink / raw) To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: Ferruh Yigit, Andrew Rybchenko Hi Rongwei, > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Monday, April 17, 2023 12:26 PM > > Add new rte_actions to push and remove the specific > type of IPv6 extension to and from original packets. > > A new extension to be pushed should be the last extension > due to the next header awareness. > > Remove can support the IPv6 extension in any position. > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > --- > doc/guides/prog_guide/rte_flow.rst | 21 ++++++++++++ > lib/ethdev/rte_flow.c | 2 ++ > lib/ethdev/rte_flow.h | 52 ++++++++++++++++++++++++++++++ > 3 files changed, 75 insertions(+) > > diff --git a/doc/guides/prog_guide/rte_flow.rst > b/doc/guides/prog_guide/rte_flow.rst > index 32fc45516a..2fe42e1cea 100644 > --- a/doc/guides/prog_guide/rte_flow.rst > +++ b/doc/guides/prog_guide/rte_flow.rst > @@ -3300,6 +3300,27 @@ The ``quota`` value is reduced according to > ``mode`` setting. > | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 > | > +------------------+----------------------------------------------------+ > > +Action: ``IPV6_EXT_PUSH`` > +^^^^^^^^^^^^^^^^^^^^^^^^^ > + > +Add an IPv6 extension into IPv6 header and its template is provided in > +its data buffer with the specific type as defined in the > +``rte_flow_action_ipv6_ext_push`` definition. > + > +This action modifies the payload of matched flows. The data supplied must > +be a valid extension in the specified type, it should be added the last one > +if preceding extension existed. When applied to the original packet the > +resulting packet must be a valid packet. > + > +Action: ``IPV6_EXT_REMOVE`` > +^^^^^^^^^^^^^^^^^^^^^^^^^^^ > + > +Remove an IPv6 extension whose type is provided in its type as defined in > +the ``rte_flow_action_ipv6_ext_remove``. > + > +This action modifies the payload of matched flow and the packet should be > +valid after removing. > + > Negative types > ~~~~~~~~~~~~~~ > > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c > index 69e6e749f7..af4b3f6da4 100644 > --- a/lib/ethdev/rte_flow.c > +++ b/lib/ethdev/rte_flow.c > @@ -259,6 +259,8 @@ static const struct rte_flow_desc_data > rte_flow_desc_action[] = { > MK_FLOW_ACTION(METER_MARK, sizeof(struct > rte_flow_action_meter_mark)), > MK_FLOW_ACTION(SEND_TO_KERNEL, 0), > MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), > + MK_FLOW_ACTION(IPV6_EXT_PUSH, sizeof(struct > rte_flow_action_ipv6_ext_push)), > + MK_FLOW_ACTION(IPV6_EXT_REMOVE, sizeof(struct > rte_flow_action_ipv6_ext_remove)), > }; > > int > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h > index 713ba8b65c..369ecbc6ba 100644 > --- a/lib/ethdev/rte_flow.h > +++ b/lib/ethdev/rte_flow.h > @@ -2912,6 +2912,25 @@ enum rte_flow_action_type { > * applied to the given ethdev Rx queue. > */ > RTE_FLOW_ACTION_TYPE_SKIP_CMAN, > + > + /** > + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH > + * > + * Push IPv6 extension into IPv6 packet. > + * > + * @see struct rte_flow_action_ipv6_ext_push. 
> + */ > + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, > + > + /** > + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE > + * > + * Remove IPv6 extension from IPv6 packet whose type > + * is provided in its configuration buffer. > + * > + * @see struct rte_flow_action_ipv6_ext_remove. > + */ > + RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE, > }; > > /** > @@ -3352,6 +3371,39 @@ struct rte_flow_action_vxlan_encap { > struct rte_flow_item *definition; > }; > > +/** > + * @warning > + * @b EXPERIMENTAL: this structure may change without prior notice > + * > + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH > + * > + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH > include: > + * > + * - IPV6_EXT TYPE / IPV6_EXT_HEADER_IN_TYPE / END > + * > + * size holds the number of bytes in @p data. > + * The data must be added as the last IPv6 extension. > + */ > +struct rte_flow_action_ipv6_ext_push { > + uint8_t *data; /**< IPv6 extension header data. */ > + size_t size; /**< Size of @p data. */ > + uint8_t type; /**< Type of IPv6 extension. */ > +}; > + > +/** > + * @warning > + * @b EXPERIMENTAL: this structure may change without prior notice > + * > + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE > + * > + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE > include: > + * > + * - IPV6_EXT TYPE / END > + */ > +struct rte_flow_action_ipv6_ext_remove { > + uint8_t type; /**< Type of IPv6 extension. */ > +}; > + > /** > * @warning > * @b EXPERIMENTAL: this structure may change without prior notice > -- > 2.27.0 You need to add the new action to release notes. Acked-by: Ori Kam <orika@nvidia.com> ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 0/2] add IPv6 extension push remove 2023-05-24 6:55 ` Ori Kam @ 2023-05-24 7:39 ` Rongwei Liu 2023-05-24 7:39 ` [PATCH v1 1/2] ethdev: add IPv6 extension push remove action Rongwei Liu ` (2 more replies) 0 siblings, 3 replies; 64+ messages in thread From: Rongwei Liu @ 2023-05-24 7:39 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Add new rte_actions to push and remove the specific type of IPv6 extension to and from original packets. v1: Split the PMD implementation, add a description into release notes. Rongwei Liu (2): ethdev: add IPv6 extension push remove action app/testpmd: add IPv6 extension push remove cli app/test-pmd/cmdline_flow.c | 443 ++++++++++++++++++++++++- doc/guides/prog_guide/rte_flow.rst | 21 ++ doc/guides/rel_notes/release_23_07.rst | 6 + lib/ethdev/rte_flow.c | 2 + lib/ethdev/rte_flow.h | 52 +++ 5 files changed, 523 insertions(+), 1 deletion(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 1/2] ethdev: add IPv6 extension push remove action 2023-05-24 7:39 ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu @ 2023-05-24 7:39 ` Rongwei Liu 2023-05-24 10:30 ` Ori Kam 2023-05-24 7:39 ` [PATCH v1 2/2] app/testpmd: add IPv6 extension push remove cli Rongwei Liu 2023-06-02 14:39 ` [PATCH v1 0/2] add IPv6 extension push remove Ferruh Yigit 2 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-05-24 7:39 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Ferruh Yigit, Andrew Rybchenko Add new rte_actions to push and remove the specific type of IPv6 extension to and from original packets. A new extension to be pushed should be the last extension due to the next header awareness. Remove can support the IPv6 extension in any position. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- doc/guides/prog_guide/rte_flow.rst | 21 +++++++++++ doc/guides/rel_notes/release_23_07.rst | 6 +++ lib/ethdev/rte_flow.c | 2 + lib/ethdev/rte_flow.h | 52 ++++++++++++++++++++++++++ 4 files changed, 81 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..2fe42e1cea 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3300,6 +3300,27 @@ The ``quota`` value is reduced according to ``mode`` setting. | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 | +------------------+----------------------------------------------------+ +Action: ``IPV6_EXT_PUSH`` +^^^^^^^^^^^^^^^^^^^^^^^^^ + +Add an IPv6 extension into the IPv6 header. The extension template, with +its specific type, is provided in the action's data buffer as defined in +the ``rte_flow_action_ipv6_ext_push`` definition. + +This action modifies the payload of matched flows. The data supplied must +be a valid extension of the specified type, and it is added as the last +extension if preceding extensions exist. When applied to the original packet, +the resulting packet must be a valid packet. + +Action: ``IPV6_EXT_REMOVE`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Remove an IPv6 extension whose type is provided in the +``rte_flow_action_ipv6_ext_remove`` definition. + +This action modifies the payload of matched flows, and the packet must +remain valid after the removal. + Negative types ~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..4543cfde1f 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -27,6 +27,12 @@ New Features .. This section should contain new features added in this release. Sample format: + * **Added actions to push or remove the specific type of IPv6 extension.** + + Added ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` and ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` + to push or remove a specific IPv6 extension into or from packets. + Push always puts the new extension as the last one due to the next header awareness. + * **Add a title in the past tense with a full stop.** Add a short 1-2 sentence description in the past tense. 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..af4b3f6da4 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -259,6 +259,8 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), MK_FLOW_ACTION(SEND_TO_KERNEL, 0), MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), + MK_FLOW_ACTION(IPV6_EXT_PUSH, sizeof(struct rte_flow_action_ipv6_ext_push)), + MK_FLOW_ACTION(IPV6_EXT_REMOVE, sizeof(struct rte_flow_action_ipv6_ext_remove)), }; int diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..369ecbc6ba 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2912,6 +2912,25 @@ enum rte_flow_action_type { * applied to the given ethdev Rx queue. */ RTE_FLOW_ACTION_TYPE_SKIP_CMAN, + + /** + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH + * + * Push an IPv6 extension into an IPv6 packet. + * + * @see struct rte_flow_action_ipv6_ext_push. + */ + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + + /** + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE + * + * Remove an IPv6 extension from an IPv6 packet whose type + * is provided in its configuration buffer. + * + * @see struct rte_flow_action_ipv6_ext_remove. + */ + RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE, }; /** @@ -3352,6 +3371,39 @@ struct rte_flow_action_vxlan_encap { struct rte_flow_item *definition; }; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH + * + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH includes: + * + * - IPV6_EXT TYPE / IPV6_EXT_HEADER_IN_TYPE / END + * + * size holds the number of bytes in @p data. + * The data must be added as the last IPv6 extension. + */ +struct rte_flow_action_ipv6_ext_push { + uint8_t *data; /**< IPv6 extension header data. */ + size_t size; /**< Size of @p data. */ + uint8_t type; /**< Type of IPv6 extension. */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE + * + * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE includes: + * + * - IPV6_EXT TYPE / END + */ +struct rte_flow_action_ipv6_ext_remove { + uint8_t type; /**< Type of IPv6 extension. */ +}; + /** * @warning + * @b EXPERIMENTAL: this structure may change without prior notice -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
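The "pushed extension must be last" rule in the commit message follows from how IPv6 chains headers: each extension stores only the type of the next header, so inserting anywhere but at the tail would mean rewriting an arbitrary header deep in the chain. A rough sketch of locating that tail (not taken from the patch; it only walks the generic hop-by-hop, routing and destination-options layouts):

#include <stddef.h>
#include <stdint.h>
#include <netinet/in.h>  /* IPPROTO_HOPOPTS, IPPROTO_ROUTING, IPPROTO_DSTOPTS */

/* Extension headers that use the generic next-header/length layout. */
static int
is_walkable_ipv6_ext(uint8_t proto)
{
	return proto == IPPROTO_HOPOPTS || proto == IPPROTO_ROUTING ||
	       proto == IPPROTO_DSTOPTS;
}

/*
 * buf points just past the fixed 40-byte IPv6 header and first_proto is
 * the IPv6 next-header field. Returns the byte offset where a new
 * extension has to be inserted so that only one next-header value (the
 * one in the current last extension) needs to change.
 */
static size_t
ipv6_ext_tail_offset(const uint8_t *buf, uint8_t first_proto)
{
	size_t off = 0;
	uint8_t proto = first_proto;

	while (is_walkable_ipv6_ext(proto)) {
		proto = buf[off];                      /* Next Header */
		off += ((size_t)buf[off + 1] + 1) * 8; /* Hdr Ext Len, 8-byte units */
	}
	return off;
}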
* RE: [PATCH v1 1/2] ethdev: add IPv6 extension push remove action 2023-05-24 7:39 ` [PATCH v1 1/2] ethdev: add IPv6 extension push remove action Rongwei Liu @ 2023-05-24 10:30 ` Ori Kam 0 siblings, 0 replies; 64+ messages in thread From: Ori Kam @ 2023-05-24 10:30 UTC (permalink / raw) To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, Suanming Mou, NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: Ferruh Yigit, Andrew Rybchenko Hi, > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Wednesday, May 24, 2023 10:39 AM > > Add new rte_actions to push and remove the specific > type of IPv6 extension to and from original packets. > > A new extension to be pushed should be the last extension > due to the next header awareness. > > Remove can support the IPv6 extension in any position. > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > --- Acked-by: Ori Kam <orika@nvidia.com> Best, Ori ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 2/2] app/testpmd: add IPv6 extension push remove cli 2023-05-24 7:39 ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu 2023-05-24 7:39 ` [PATCH v1 1/2] ethdev: add IPv6 extension push remove action Rongwei Liu @ 2023-05-24 7:39 ` Rongwei Liu 2023-06-02 14:39 ` [PATCH v1 0/2] add IPv6 extension push remove Ferruh Yigit 2 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-05-24 7:39 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Aman Singh, Yuying Zhang Add command lines to generate IPv6 routing extension push and remove patterns and follow the raw_encap/decap style. Add the new actions to the action template parsing. Generating the action patterns 1. IPv6 routing extension push set ipv6_ext_push 1 ipv6_ext type is 43 / ipv6_routing_ext ext_type is 4 ext_next_hdr is 17 ext_seg_left is 2 / end_set 2. IPv6 routing extension remove set ipv6_ext_remove 1 ipv6_ext type is 43 / end_set Specifying the action in the template 1. actions_template_id 1 template ipv6_ext_push index 1 2. actions_template_id 1 template ipv6_ext_remove index 1 Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- app/test-pmd/cmdline_flow.c | 443 +++++++++++++++++++++++++++++++++++- 1 file changed, 442 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..ea4cebce1c 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -74,6 +74,9 @@ enum index { SET_RAW_INDEX, SET_SAMPLE_ACTIONS, SET_SAMPLE_INDEX, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH, + SET_IPV6_EXT_INDEX, /* Top-level command. */ FLOW, @@ -496,6 +499,8 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_IPV6_PUSH_REMOVE_EXT, + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, /* Validate/create actions. */ ACTIONS, @@ -665,6 +670,12 @@ enum index { ACTION_QUOTA_QU_LIMIT, ACTION_QUOTA_QU_UPDATE_OP, ACTION_QUOTA_QU_UPDATE_OP_NAME, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_IPV6_EXT_REMOVE_INDEX_VALUE, + ACTION_IPV6_EXT_PUSH, + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_IPV6_EXT_PUSH_INDEX_VALUE, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -731,6 +742,42 @@ struct action_raw_decap_data { uint16_t idx; }; +/** Maximum data size in struct rte_flow_action_ipv6_ext_push. */ +#define ACTION_IPV6_EXT_PUSH_MAX_DATA 512 +#define IPV6_EXT_PUSH_CONFS_MAX_NUM 8 + +/** Storage for struct rte_flow_action_ipv6_ext_push. */ +struct ipv6_ext_push_conf { + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + size_t size; + uint8_t type; +}; + +struct ipv6_ext_push_conf ipv6_ext_push_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_push including external data. */ +struct action_ipv6_ext_push_data { + struct rte_flow_action_ipv6_ext_push conf; + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + uint8_t type; + uint16_t idx; +}; + +/** Storage for struct rte_flow_action_ipv6_ext_remove. */ +struct ipv6_ext_remove_conf { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; +}; + +struct ipv6_ext_remove_conf ipv6_ext_remove_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_remove including external data. 
*/ +struct action_ipv6_ext_remove_data { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; + uint16_t idx; +}; + struct vxlan_encap_conf vxlan_encap_conf = { .select_ipv4 = 1, .select_vlan = 0, @@ -2022,6 +2069,8 @@ static const enum index next_action[] = { ACTION_SEND_TO_KERNEL, ACTION_QUOTA_CREATE, ACTION_QUOTA_QU, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_PUSH, ZERO, }; @@ -2230,6 +2279,18 @@ static const enum index action_raw_decap[] = { ZERO, }; +static const enum index action_ipv6_ext_remove[] = { + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_NEXT, + ZERO, +}; + +static const enum index action_ipv6_ext_push[] = { + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_NEXT, + ZERO, +}; + static const enum index action_set_tag[] = { ACTION_SET_TAG_DATA, ACTION_SET_TAG_INDEX, @@ -2293,6 +2354,22 @@ static const enum index next_action_sample[] = { ZERO, }; +static const enum index item_ipv6_push_ext[] = { + ITEM_IPV6_PUSH_REMOVE_EXT, + ZERO, +}; + +static const enum index item_ipv6_push_ext_type[] = { + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, + ZERO, +}; + +static const enum index item_ipv6_push_ext_header[] = { + ITEM_IPV6_ROUTING_EXT, + ITEM_NEXT, + ZERO, +}; + static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -2334,6 +2411,9 @@ static int parse_set_raw_encap_decap(struct context *, const struct token *, static int parse_set_sample_action(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_set_ipv6_ext_action(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2411,6 +2491,22 @@ static int parse_vc_action_raw_encap_index(struct context *, static int parse_vc_action_raw_decap_index(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_remove_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -2596,6 +2692,8 @@ static int comp_set_raw_index(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_sample_index(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, @@ -6472,6 +6570,48 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), .call = parse_vc, }, + [ACTION_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "IPv6 extension type, 
defined by set ipv6_ext_remove", + .priv = PRIV_ACTION(IPV6_EXT_REMOVE, + sizeof(struct action_ipv6_ext_remove_data)), + .next = NEXT(action_ipv6_ext_remove), + .call = parse_vc_action_ipv6_ext_remove, + }, + [ACTION_IPV6_EXT_REMOVE_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_remove", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_REMOVE_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_REMOVE_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_remove_index, + .comp = comp_set_ipv6_ext_index, + }, + [ACTION_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "IPv6 extension data, defined by set ipv6_ext_push", + .priv = PRIV_ACTION(IPV6_EXT_PUSH, + sizeof(struct action_ipv6_ext_push_data)), + .next = NEXT(action_ipv6_ext_push), + .call = parse_vc_action_ipv6_ext_push, + }, + [ACTION_IPV6_EXT_PUSH_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_push", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_PUSH_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_PUSH_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_push_index, + .comp = comp_set_ipv6_ext_index, + }, /* Top level command. */ [SET] = { .name = "set", @@ -6481,7 +6621,9 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (SET_RAW_ENCAP, SET_RAW_DECAP, - SET_SAMPLE_ACTIONS)), + SET_SAMPLE_ACTIONS, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH)), .call = parse_set_init, }, /* Sub-level commands. */ @@ -6529,6 +6671,49 @@ static const struct token token_list[] = { 0, RAW_SAMPLE_CONFS_MAX_NUM - 1)), .call = parse_set_sample_action, }, + [SET_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_INDEX] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "index of ipv6 extension push/remove actions", + .next = NEXT(item_ipv6_push_ext), + .call = parse_port, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT] = { + .name = "ipv6_ext", + .help = "set IPv6 extension header", + .priv = PRIV_ITEM(IPV6_EXT, + sizeof(struct rte_flow_item_ipv6_ext)), + .next = NEXT(item_ipv6_push_ext_type), + .call = parse_vc, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT_TYPE] = { + .name = "type", + .help = "set IPv6 extension type", + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_ext, + next_hdr)), + .next = NEXT(item_ipv6_push_ext_header, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + }, [ACTION_SET_TAG] = { .name = "set_tag", .help = "set tag", @@ -8843,6 +9028,140 @@ parse_vc_action_raw_decap(struct context *ctx, const struct token *token, return ret; } +static int +parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct 
action_ipv6_ext_remove_data *ipv6_ext_remove_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. */ + ipv6_ext_remove_data = ctx->object; + ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[0].type; + action->conf = &ipv6_ext_remove_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_remove_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_remove_data *action_ipv6_ext_remove_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_remove_data, idx), + sizeof(((struct action_ipv6_ext_remove_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_remove_data = ctx->object; + idx = action_ipv6_ext_remove_data->idx; + action_ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[idx].type; + action->conf = &action_ipv6_ext_remove_data->conf; + return len; +} + +static int +parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_ipv6_ext_push_data *ipv6_ext_push_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + ipv6_ext_push_data = ctx->object; + ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[0].type; + ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[0].data; + ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[0].size; + action->conf = &ipv6_ext_push_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_push_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_push_data *action_ipv6_ext_push_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_push_data, idx), + sizeof(((struct action_ipv6_ext_push_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_push_data = ctx->object; + idx = action_ipv6_ext_push_data->idx; + action_ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[idx].type; + action_ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[idx].size; + action_ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[idx].data; + action->conf = &action_ipv6_ext_push_data->conf; + return len; +} + static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -10532,6 +10851,35 @@ parse_set_sample_action(struct context *ctx, const struct token *token, return len; } +/** Parse set command, initialize output buffer for subsequent tokens. */ +static int +parse_set_ipv6_ext_action(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Make sure buffer is large enough. */ + if (size < sizeof(*out)) + return -1; + ctx->objdata = 0; + ctx->objmask = NULL; + ctx->object = out; + if (!out->command) + return -1; + out->command = ctx->curr; + /* For ipv6_ext_push/remove, only the pattern is needed. */ + out->args.vc.pattern = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; +} + /** * Parse set raw_encap/raw_decap command, * initialize output buffer for subsequent tokens. */ @@ -10961,6 +11309,24 @@ comp_set_raw_index(struct context *ctx, const struct token *token, return nb; } +/** Complete index number for set ipv6_ext_push/ipv6_ext_remove commands. */ +static int +comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + uint16_t idx = 0; + uint16_t nb = 0; + + RTE_SET_USED(ctx); + RTE_SET_USED(token); + for (idx = 0; idx < IPV6_EXT_PUSH_CONFS_MAX_NUM; ++idx) { + if (buf && idx == ent) + return snprintf(buf, size, "%u", idx); + ++nb; + } + return nb; +} + /** Complete index number for set raw_encap/raw_decap commands. */ static int comp_set_sample_index(struct context *ctx, const struct token *token, @@ -11855,6 +12221,78 @@ flow_item_default_mask(const struct rte_flow_item *item) return mask; } +/** Dispatch parsed buffer to function calls. 
*/ +static void +cmd_set_ipv6_ext_parsed(const struct buffer *in) +{ + uint32_t n = in->args.vc.pattern_n; + int i = 0; + struct rte_flow_item *item = NULL; + size_t size = 0; + uint8_t *data = NULL; + uint8_t *type = NULL; + size_t *total_size = NULL; + uint16_t idx = in->port; /* We borrow port field as index */ + struct rte_flow_item_ipv6_routing_ext *ext; + const struct rte_flow_item_ipv6_ext *ipv6_ext; + + RTE_ASSERT(in->command == SET_IPV6_EXT_PUSH || + in->command == SET_IPV6_EXT_REMOVE); + + if (in->command == SET_IPV6_EXT_REMOVE) { + if (n != 1 || in->args.vc.pattern->type != + RTE_FLOW_ITEM_TYPE_IPV6_EXT) { + fprintf(stderr, "Error - Not supported item\n"); + return; + } + type = (uint8_t *)&ipv6_ext_remove_confs[idx].type; + item = in->args.vc.pattern; + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + return; + } + + total_size = &ipv6_ext_push_confs[idx].size; + data = (uint8_t *)&ipv6_ext_push_confs[idx].data; + type = (uint8_t *)&ipv6_ext_push_confs[idx].type; + + *total_size = 0; + memset(data, 0x00, ACTION_IPV6_EXT_PUSH_MAX_DATA); + for (i = n - 1 ; i >= 0; --i) { + item = in->args.vc.pattern + i; + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV6_EXT: + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + break; + case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT: + ext = (struct rte_flow_item_ipv6_routing_ext *)(uintptr_t)item->spec; + if (!ext->hdr.hdr_len) { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.segments_left << 4); + ext->hdr.hdr_len = ext->hdr.segments_left << 1; + /* Assume an SRH with no TLVs: last_entry is segments_left - 1. */ + if (ext->hdr.type == 4) + ext->hdr.last_entry = ext->hdr.segments_left - 1; + } else { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.hdr_len << 3); + } + *total_size += size; + memcpy(data, ext, size); + break; + default: + fprintf(stderr, "Error - Not supported item\n"); + goto error; + } + } + RTE_ASSERT((*total_size) <= ACTION_IPV6_EXT_PUSH_MAX_DATA); + return; +error: + *total_size = 0; + memset(data, 0x00, ACTION_IPV6_EXT_PUSH_MAX_DATA); +} + /** Dispatch parsed buffer to function calls. */ static void cmd_set_raw_parsed_sample(const struct buffer *in) @@ -11988,6 +12426,9 @@ cmd_set_raw_parsed(const struct buffer *in) if (in->command == SET_SAMPLE_ACTIONS) return cmd_set_raw_parsed_sample(in); + else if (in->command == SET_IPV6_EXT_PUSH || + in->command == SET_IPV6_EXT_REMOVE) + return cmd_set_ipv6_ext_parsed(in); RTE_ASSERT(in->command == SET_RAW_ENCAP || in->command == SET_RAW_DECAP); if (in->command == SET_RAW_ENCAP) { -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
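To make the SRH sizing inside cmd_set_ipv6_ext_parsed() concrete, here is the arithmetic worked through for the commit-message example (ext_seg_left is 2) as a standalone sketch; the shifts mirror the code above:

#include <stddef.h>
#include <stdint.h>
#include <rte_ip.h>  /* struct rte_ipv6_routing_ext: 8-byte base header */

static void
srh_sizing_example(void)
{
	uint8_t segments_left = 2;  /* "ext_seg_left is 2" in the cli example */

	/* Each segment is one 16-byte IPv6 address: segments_left << 4 = 32. */
	size_t size = sizeof(struct rte_ipv6_routing_ext) + (segments_left << 4);
	/* size = 8 + 32 = 40 bytes copied into the push template buffer. */

	/* hdr_len counts 8-byte units excluding the first 8 bytes. */
	uint8_t hdr_len = segments_left << 1;   /* (40 - 8) / 8 = 4 */

	/* For an SRH (type 4) without TLVs, last_entry indexes the final segment. */
	uint8_t last_entry = segments_left - 1; /* = 1 */

	(void)size; (void)hdr_len; (void)last_entry;
}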
* Re: [PATCH v1 0/2] add IPv6 extension push remove 2023-05-24 7:39 ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu 2023-05-24 7:39 ` [PATCH v1 1/2] ethdev: add IPv6 extension push remove action Rongwei Liu 2023-05-24 7:39 ` [PATCH v1 2/2] app/testpmd: add IPv6 extension push remove cli Rongwei Liu @ 2023-06-02 14:39 ` Ferruh Yigit 2023-07-10 2:32 ` Rongwei Liu 2 siblings, 1 reply; 64+ messages in thread From: Ferruh Yigit @ 2023-06-02 14:39 UTC (permalink / raw) To: Rongwei Liu, matan, viacheslavo, orika, suanmingm, thomas; +Cc: dev On 5/24/2023 8:39 AM, Rongwei Liu wrote: > Add new rte_actions to push and remove the specific > type of IPv6 extension to and from original packets. > > v1: Split the PMD implementation, add a description into release notes. > > Rongwei Liu (2): > ethdev: add IPv6 extension push remove action > app/testpmd: add IPv6 extension push remove cli > Series applied to dpdk-next-net/main, thanks. ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v1 0/2] add IPv6 extension push remove 2023-06-02 14:39 ` [PATCH v1 0/2] add IPv6 extension push remove Ferruh Yigit @ 2023-07-10 2:32 ` Rongwei Liu 2023-07-10 8:55 ` Ferruh Yigit 0 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-07-10 2:32 UTC (permalink / raw) To: Ferruh Yigit, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL), Andrew Rybchenko Cc: dev, Matan Azrad, Slava Ovsiienko, Suanming Mou Hi Ferruh & Andrew & Ori & Thomas: Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release. There are some disagreements which need to be addressed internally. We will continue to work on this and plan to push it in the next release. RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/ V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/ BR Rongwei > -----Original Message----- > From: Ferruh Yigit <ferruh.yigit@amd.com> > Sent: Friday, June 2, 2023 22:39 > To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>; > Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; > Suanming Mou <suanmingm@nvidia.com>; NBU-Contact-Thomas Monjalon > (EXTERNAL) <thomas@monjalon.net> > Cc: dev@dpdk.org > Subject: Re: [PATCH v1 0/2] add IPv6 extension push remove > > External email: Use caution opening links or attachments > > > On 5/24/2023 8:39 AM, Rongwei Liu wrote: > > Add new rte_actions to push and remove the specific type of IPv6 > > extension to and from original packets. > > > > v1: Split the PMD implementation, add a description into release notes. > > > > Rongwei Liu (2): > > ethdev: add IPv6 extension push remove action > > app/testpmd: add IPv6 extension push remove cli > > > > Series applied to dpdk-next-net/main, thanks. ^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [PATCH v1 0/2] add IPv6 extension push remove 2023-07-10 2:32 ` Rongwei Liu @ 2023-07-10 8:55 ` Ferruh Yigit 2023-07-10 14:41 ` Stephen Hemminger 0 siblings, 1 reply; 64+ messages in thread From: Ferruh Yigit @ 2023-07-10 8:55 UTC (permalink / raw) To: Rongwei Liu, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL), Andrew Rybchenko Cc: dev, Matan Azrad, Slava Ovsiienko, Suanming Mou On 7/10/2023 3:32 AM, Rongwei Liu wrote: > Hi Ferruh & Andrew & Ori & Thomas: > Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release. > There are some disagreements which need to be addressed internally. > We will continue to work on this and plan to push it in the next release. > > RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/ > V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/ > Hi Rongwei, Thanks for the heads up. As long as there is a plan to upstream driver implementation, I think it is OK to keep ethdev change and wait for driver implementation for better design instead of rushing it for this release with lower quality (although target should be to have driver changes with same release with API changes for future features). >> -----Original Message----- >> From: Ferruh Yigit <ferruh.yigit@amd.com> >> Sent: Friday, June 2, 2023 22:39 >> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>; >> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; >> Suanming Mou <suanmingm@nvidia.com>; NBU-Contact-Thomas Monjalon >> (EXTERNAL) <thomas@monjalon.net> >> Cc: dev@dpdk.org >> Subject: Re: [PATCH v1 0/2] add IPv6 extension push remove >> >> External email: Use caution opening links or attachments >> >> >> On 5/24/2023 8:39 AM, Rongwei Liu wrote: >>> Add new rte_actions to push and remove the specific type of IPv6 >>> extension to and from original packets. >>> >>> v1: Split the PMD implementation, add a description into release notes. >>> >>> Rongwei Liu (2): >>> ethdev: add IPv6 extension push remove action >>> app/testpmd: add IPv6 extension push remove cli >>> >> >> Series applied to dpdk-next-net/main, thanks. > ^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [PATCH v1 0/2] add IPv6 extension push remove 2023-07-10 8:55 ` Ferruh Yigit @ 2023-07-10 14:41 ` Stephen Hemminger 2023-07-11 6:16 ` Thomas Monjalon 0 siblings, 1 reply; 64+ messages in thread From: Stephen Hemminger @ 2023-07-10 14:41 UTC (permalink / raw) To: Ferruh Yigit Cc: Rongwei Liu, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL), Andrew Rybchenko, dev, Matan Azrad, Slava Ovsiienko, Suanming Mou On Mon, 10 Jul 2023 09:55:59 +0100 Ferruh Yigit <ferruh.yigit@amd.com> wrote: > On 7/10/2023 3:32 AM, Rongwei Liu wrote: > > Hi Ferruh & Andrew & Ori & Thomas: > > Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release. > > There are some disagreements which need to be addressed internally. > > We will continue to work on this and plan to push it in the next release. > > > > RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/ > > V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/ > > > > Hi Rongwei, > > Thanks for the heads up. > As long as there is a plan to upstream driver implementation, I think it > is OK to keep ethdev change and wait for driver implementation for > better design instead of rushing it for this release with lower quality > (although target should be to have driver changes with same release with > API changes for future features). Please wait the change until driver is ready. Don't want to deal with API/ABI changes when driver is upstream. Also, no unused code please. ^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [PATCH v1 0/2] add IPv6 extension push remove 2023-07-10 14:41 ` Stephen Hemminger @ 2023-07-11 6:16 ` Thomas Monjalon 2023-09-19 8:12 ` [PATCH v3] net/mlx5: add test for live migration Rongwei Liu 0 siblings, 1 reply; 64+ messages in thread From: Thomas Monjalon @ 2023-07-11 6:16 UTC (permalink / raw) To: Ferruh Yigit, Stephen Hemminger Cc: Rongwei Liu, Ori Kam, Andrew Rybchenko, dev, Matan Azrad, Slava Ovsiienko, Suanming Mou 10/07/2023 16:41, Stephen Hemminger: > On Mon, 10 Jul 2023 09:55:59 +0100 > Ferruh Yigit <ferruh.yigit@amd.com> wrote: > > > On 7/10/2023 3:32 AM, Rongwei Liu wrote: > > > Hi Ferruh & Andrew & Ori & Thomas: > > > Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release. > > > There are some disagreements which need to be addressed internally. > > > We will continue to work on this and plan to push it in the next release. > > > > > > RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/ > > > V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/ > > > > > > > Hi Rongwei, > > > > Thanks for the heads up. > > As long as there is a plan to upstream driver implementation, I think it > > is OK to keep ethdev change and wait for driver implementation for > > better design instead of rushing it for this release with lower quality > > (although target should be to have driver changes with same release with > > API changes for future features). > > Please wait the change until driver is ready. > Don't want to deal with API/ABI changes when driver is upstream. > Also, no unused code please. There was a driver patch sent in April. It was impossible to imagine it was not good enough to be merged. ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v3] net/mlx5: add test for live migration 2023-07-11 6:16 ` Thomas Monjalon @ 2023-09-19 8:12 ` Rongwei Liu 2023-10-16 8:19 ` Thomas Monjalon 0 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-09-19 8:12 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas This patch adds testpmd app a runtime function to test the live migration API. testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> v3: Add missing --in-reply-to option. v2: Change the command prompt from integer to string. --- doc/guides/nics/mlx5.rst | 14 ++++ drivers/net/mlx5/mlx5_testpmd.c | 124 ++++++++++++++++++++++++++++++++ 2 files changed, 138 insertions(+) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 7bee57d9dd..5726e497a8 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -2179,3 +2179,17 @@ where: * ``sw_queue_id``: queue index in range [64536, 65535]. This range is the highest 1000 numbers. * ``hw_queue_id``: queue index given by HW in queue creation. + +Set Flow Engine Mode +~~~~~~~~~~~~~~~~~~~~ + +Set the flow engine to active(0) or standby(1) mode with specific flags:: + +.. code-block:: console + + testpmd> mlx5 set flow_engine <active|standby> [<flags>] + +This command is used for testing live migration and works for +software steering only. +Default FDB jump should be disabled if switchdev is enabled. +The mode will propagate to all the probed ports. diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c index 879ea2826e..c70a10b3af 100644 --- a/drivers/net/mlx5/mlx5_testpmd.c +++ b/drivers/net/mlx5/mlx5_testpmd.c @@ -25,6 +25,29 @@ static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS]; #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */ +#define PARSE_DELIMITER " \f\n\r\t\v" + +static int +parse_uint(uint64_t *value, const char *str) +{ + char *next = NULL; + uint64_t n; + + errno = 0; + /* Parse number string */ + if (!strncasecmp(str, "0x", 2)) { + str += 2; + n = strtol(str, &next, 16); + } else { + n = strtol(str, &next, 10); + } + if (errno != 0 || str == next || *next != '\0') + return -1; + + *value = n; + + return 0; +} /** * Disable the host shaper and re-arm available descriptor threshold event. @@ -561,6 +584,102 @@ cmdline_parse_inst_t mlx5_cmd_unmap_ext_rxq = { } }; +/* Set flow engine mode with flags command. 
*/ +struct mlx5_cmd_set_flow_engine_mode { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t flow_engine; + cmdline_multi_string_t mode; +}; + +static int +parse_multi_token_flow_engine_mode(char *t_str, enum mlx5_flow_engine_mode *mode, + uint32_t *flag) +{ + uint64_t val; + char *token; + int ret; + + *flag = 0; + /* First token: mode string */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return -1; + + if (!strcmp(token, "active")) + *mode = MLX5_FLOW_ENGINE_MODE_ACTIVE; + else if (!strcmp(token, "standby")) + *mode = MLX5_FLOW_ENGINE_MODE_STANDBY; + else + return -1; + + /* Second token: flag */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + ret = parse_uint(&val, token); + if (ret != 0 || val > UINT32_MAX) + return -1; + + *flag = val; + return 0; +} + +static void +mlx5_cmd_set_flow_engine_mode_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct mlx5_cmd_set_flow_engine_mode *res = parsed_result; + enum mlx5_flow_engine_mode mode; + uint32_t flag; + int ret; + + ret = parse_multi_token_flow_engine_mode(res->mode, &mode, &flag); + + if (ret < 0) { + fprintf(stderr, "Bad input\n"); + return; + } + + ret = rte_pmd_mlx5_flow_engine_set_mode(mode, flag); + + if (ret < 0) + fprintf(stderr, "Fail to set flow_engine to %s mode with flag 0x%x, error %s\n", + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag, + strerror(-ret)); + else + TESTPMD_LOG(DEBUG, "Set %d ports flow_engine to %s mode with flag 0x%x\n", ret, + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag); +} + +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mlx5 = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mlx5, + "mlx5"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_set = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, set, + "set"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_flow_engine = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, flow_engine, + "flow_engine"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mode = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mode, + TOKEN_STRING_MULTI); + +cmdline_parse_inst_t mlx5_cmd_set_flow_engine_mode = { + .f = &mlx5_cmd_set_flow_engine_mode_parsed, + .data = NULL, + .help_str = "mlx5 set flow_engine <active|standby> [<flag>]", + .tokens = { + (void *)&mlx5_cmd_set_flow_engine_mode_mlx5, + (void *)&mlx5_cmd_set_flow_engine_mode_set, + (void *)&mlx5_cmd_set_flow_engine_mode_flow_engine, + (void *)&mlx5_cmd_set_flow_engine_mode_mode, + NULL, + } +}; + static struct testpmd_driver_commands mlx5_driver_cmds = { .commands = { { @@ -588,6 +707,11 @@ static struct testpmd_driver_commands mlx5_driver_cmds = { .help = "mlx5 port (port_id) ext_rxq unmap (sw_queue_id)\n" " Unmap external Rx queue ethdev index mapping\n\n", }, + { + .ctx = &mlx5_cmd_set_flow_engine_mode, + .help = "mlx5 set flow_engine (active|standby) [(flag)]\n" + " Set flow_engine to the specific mode with flag.\n\n" + }, { .ctx = NULL, }, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
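For completeness, a sketch of what the new command drives underneath, built from the rte_pmd_mlx5_flow_engine_set_mode() call and mode enum visible in the patch. It assumes the declaration lives in rte_pmd_mlx5.h next to the other mlx5-specific APIs, and it passes the flag as the opaque value 1 (BIT(0)), the only value the discussion below mentions as accepted:

#include <stdio.h>
#include <string.h>
#include <rte_pmd_mlx5.h>  /* rte_pmd_mlx5_flow_engine_set_mode() */

static int
standby_for_live_migration(void)
{
	/* Move every probed mlx5 port to standby; a non-negative return
	 * value is the number of ports that were switched. */
	int ret = rte_pmd_mlx5_flow_engine_set_mode(MLX5_FLOW_ENGINE_MODE_STANDBY,
						    1u << 0);
	if (ret < 0) {
		fprintf(stderr, "standby failed: %s\n", strerror(-ret));
		return ret;
	}
	printf("%d ports now in standby\n", ret);
	/* ... perform the migration; the side taking over traffic then
	 * switches back to active with no flags: */
	return rte_pmd_mlx5_flow_engine_set_mode(MLX5_FLOW_ENGINE_MODE_ACTIVE, 0);
}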
* Re: [PATCH v3] net/mlx5: add test for live migration 2023-09-19 8:12 ` [PATCH v3] net/mlx5: add test for live migration Rongwei Liu @ 2023-10-16 8:19 ` Thomas Monjalon 2023-10-16 8:25 ` Rongwei Liu 2023-10-16 9:22 ` [PATCH v4] " Rongwei Liu 0 siblings, 2 replies; 64+ messages in thread From: Thomas Monjalon @ 2023-10-16 8:19 UTC (permalink / raw) To: Rongwei Liu; +Cc: dev, matan, viacheslavo, orika, suanmingm, rasland 19/09/2023 10:12, Rongwei Liu: > This patch adds testpmd app a runtime function to test the live > migration API. > > testpmd> mlx5 set flow_engine <active|standby> [<flag>] > Flag is optional. > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> > Acked-by: Ori Kam <orika@nvidia.com> [...] > +Set Flow Engine Mode > +~~~~~~~~~~~~~~~~~~~~ > + > +Set the flow engine to active(0) or standby(1) mode with specific flags:: Need space before brackets. > + > +.. code-block:: console > + > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] What are the flags? > + > +This command is used for testing live migration and works for > +software steering only. > +Default FDB jump should be disabled if switchdev is enabled. > +The mode will propagate to all the probed ports. Looks OK. ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v3] net/mlx5: add test for live migration 2023-10-16 8:19 ` Thomas Monjalon @ 2023-10-16 8:25 ` Rongwei Liu 2023-10-16 9:26 ` Rongwei Liu 2023-10-16 9:26 ` Thomas Monjalon 1 sibling, 2 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-16 8:25 UTC (permalink / raw) To: NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, Raslan Darawsheh Hi BR Rongwei > -----Original Message----- > From: Thomas Monjalon <thomas@monjalon.net> > Sent: Monday, October 16, 2023 16:20 > To: Rongwei Liu <rongweil@nvidia.com> > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > Subject: Re: [PATCH v3] net/mlx5: add test for live migration > > External email: Use caution opening links or attachments > > > 19/09/2023 10:12, Rongwei Liu: > > This patch adds testpmd app a runtime function to test the live > > migration API. > > > > testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is > > optional. > > > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> > > Acked-by: Ori Kam <orika@nvidia.com> > [...] > > +Set Flow Engine Mode > > +~~~~~~~~~~~~~~~~~~~~ > > + > > +Set the flow engine to active(0) or standby(1) mode with specific flags:: > > Need space before brackets. > Sure. > > + > > +.. code-block:: console > > + > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > What are the flags? > The flag is optional and defined as a bitmap. For now, only one value is accepted: BIT(0). I don't have any idea how to propagate the value definition list here. Any suggestions? > > + > > +This command is used for testing live migration and works for > > +software steering only. > > +Default FDB jump should be disabled if switchdev is enabled. > > +The mode will propagate to all the probed ports. > > Looks OK. > ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v3] net/mlx5: add test for live migration 2023-10-16 8:25 ` Rongwei Liu @ 2023-10-16 9:26 ` Rongwei Liu 2023-10-16 9:26 ` Thomas Monjalon 1 sibling, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-16 9:26 UTC (permalink / raw) To: NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, Raslan Darawsheh BR Rongwei > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Monday, October 16, 2023 16:26 > To: Thomas Monjalon <thomas@monjalon.net> > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > Subject: RE: [PATCH v3] net/mlx5: add test for live migration > > Hi > > BR > Rongwei > > > -----Original Message----- > > From: Thomas Monjalon <thomas@monjalon.net> > > Sent: Monday, October 16, 2023 16:20 > > To: Rongwei Liu <rongweil@nvidia.com> > > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > > Subject: Re: [PATCH v3] net/mlx5: add test for live migration > > > > External email: Use caution opening links or attachments > > > > > > 19/09/2023 10:12, Rongwei Liu: > > > This patch adds testpmd app a runtime function to test the live > > > migration API. > > > > > > testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is > > > optional. > > > > > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > > > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> > > > Acked-by: Ori Kam <orika@nvidia.com> > > [...] > > > +Set Flow Engine Mode > > > +~~~~~~~~~~~~~~~~~~~~ > > > + > > > +Set the flow engine to active(0) or standby(1) mode with specific flags:: > > > > Need space before brackets. > > > Sure. > > > + > > > +.. code-block:: console > > > + > > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > > > What are the flags? > > > The flag is optional and defined as a bitmap. > For now, only one value is accepted: BIT(0). > I don't have any idea how to propagate the value definition list here. Any > suggestions? I will add one more description to mention it is in bitmap style. > > > + > > > +This command is used for testing live migration and works for > > > +software steering only. > > > +Default FDB jump should be disabled if switchdev is enabled. > > > +The mode will propagate to all the probed ports. > > > > Looks OK. > > ^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [PATCH v3] net/mlx5: add test for live migration 2023-10-16 8:25 ` Rongwei Liu 2023-10-16 9:26 ` Rongwei Liu @ 2023-10-16 9:26 ` Thomas Monjalon 2023-10-16 9:29 ` Rongwei Liu 1 sibling, 1 reply; 64+ messages in thread From: Thomas Monjalon @ 2023-10-16 9:26 UTC (permalink / raw) To: Rongwei Liu Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, Raslan Darawsheh 16/10/2023 10:25, Rongwei Liu: > From: Thomas Monjalon <thomas@monjalon.net> > > 19/09/2023 10:12, Rongwei Liu: > > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > > > What are the flags? > > > The flag is optional and defined a as bitmap. > For now, only one value is accepted: BIT(0). > I don't have any idea to propagate the value definition list here. Any suggestions? Just add it and give the usage of the flag or refer to another part of the doc for explanation. ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v3] net/mlx5: add test for live migration 2023-10-16 9:26 ` Thomas Monjalon @ 2023-10-16 9:29 ` Rongwei Liu 2023-10-25 9:07 ` Rongwei Liu 0 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-10-16 9:29 UTC (permalink / raw) To: NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, Raslan Darawsheh Hi BR Rongwei > -----Original Message----- > From: Thomas Monjalon <thomas@monjalon.net> > Sent: Monday, October 16, 2023 17:27 > To: Rongwei Liu <rongweil@nvidia.com> > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > Subject: Re: [PATCH v3] net/mlx5: add test for live migration > > External email: Use caution opening links or attachments > > > 16/10/2023 10:25, Rongwei Liu: > > From: Thomas Monjalon <thomas@monjalon.net> > > > 19/09/2023 10:12, Rongwei Liu: > > > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > > > > > What are the flags? > > > > > The flag is optional and defined a as bitmap. > > For now, only one value is accepted: BIT(0). > > I don't have any idea to propagate the value definition list here. Any > suggestions? > > Just add it and give the usage of the flag or refer to another part of the doc for > explanation. How about changing it to: "Set the flow engine to active or standby mode with specific flags (bitmap style)::" Does that sound good? > ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v3] net/mlx5: add test for live migration 2023-10-16 9:29 ` Rongwei Liu @ 2023-10-25 9:07 ` Rongwei Liu 0 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-25 9:07 UTC (permalink / raw) To: Rongwei Liu, NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, Raslan Darawsheh BR Rongwei > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Monday, October 16, 2023 17:30 > To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net> > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > Subject: RE: [PATCH v3] net/mlx5: add test for live migration > > External email: Use caution opening links or attachments > > > Hi > > BR > Rongwei > > > -----Original Message----- > > From: Thomas Monjalon <thomas@monjalon.net> > > Sent: Monday, October 16, 2023 17:27 > > To: Rongwei Liu <rongweil@nvidia.com> > > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > > <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com> > > Subject: Re: [PATCH v3] net/mlx5: add test for live migration > > > > External email: Use caution opening links or attachments > > > > > > 16/10/2023 10:25, Rongwei Liu: > > > From: Thomas Monjalon <thomas@monjalon.net> > > > > 19/09/2023 10:12, Rongwei Liu: > > > > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > > > > > > > What are the flags? > > > > > > > The flag is optional and defined a as bitmap. > > > For now, only one value is accepted: BIT(0). > > > I don't have any idea to propagate the value definition list here. > > > Any > > suggestions? > > > > Just add it and give the usage of the flag or refer to another part of > > the doc for explanation. > Change it as: " Set the flow engine to active or standby mode with specific > flags (bitmap style)::" > Sound good? > > @NBU-Contact-Thomas Monjalon (EXTERNAL) are we good to move forward? Thanks ^ permalink raw reply [flat|nested] 64+ messages in thread
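Whatever final wording the doc settles on, the command itself is already unambiguous; a hypothetical session passing the optional flag as a hex bitmap (0x1 encodes BIT(0)):

testpmd> mlx5 set flow_engine standby 0x1
testpmd> mlx5 set flow_engine active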
* [PATCH v4] net/mlx5: add test for live migration 2023-10-16 8:19 ` Thomas Monjalon 2023-10-16 8:25 ` Rongwei Liu @ 2023-10-16 9:22 ` Rongwei Liu 2023-10-25 9:36 ` [PATCH v5] " Rongwei Liu 1 sibling, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-10-16 9:22 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas This patch adds testpmd app a runtime function to test the live migration API. testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional and in bitmap style. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- doc/guides/nics/mlx5.rst | 14 ++++ drivers/net/mlx5/mlx5_testpmd.c | 124 ++++++++++++++++++++++++++++++++ 2 files changed, 138 insertions(+) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 7bee57d9dd..5921d40e17 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -2179,3 +2179,17 @@ where: * ``sw_queue_id``: queue index in range [64536, 65535]. This range is the highest 1000 numbers. * ``hw_queue_id``: queue index given by HW in queue creation. + +Set Flow Engine Mode +~~~~~~~~~~~~~~~~~~~~ + +Set the flow engine to active or standby mode with specific flags (bitmap style):: + +.. code-block:: console + + testpmd> mlx5 set flow_engine <active|standby> [<flags>] + +This command is used for testing live migration and works for +software steering only. +Default FDB jump should be disabled if switchdev is enabled. +The mode will propagate to all the probed ports. diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c index 879ea2826e..c70a10b3af 100644 --- a/drivers/net/mlx5/mlx5_testpmd.c +++ b/drivers/net/mlx5/mlx5_testpmd.c @@ -25,6 +25,29 @@ static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS]; #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */ +#define PARSE_DELIMITER " \f\n\r\t\v" + +static int +parse_uint(uint64_t *value, const char *str) +{ + char *next = NULL; + uint64_t n; + + errno = 0; + /* Parse number string */ + if (!strncasecmp(str, "0x", 2)) { + str += 2; + n = strtol(str, &next, 16); + } else { + n = strtol(str, &next, 10); + } + if (errno != 0 || str == next || *next != '\0') + return -1; + + *value = n; + + return 0; +} /** * Disable the host shaper and re-arm available descriptor threshold event. @@ -561,6 +584,102 @@ cmdline_parse_inst_t mlx5_cmd_unmap_ext_rxq = { } }; +/* Set flow engine mode with flags command. 
*/ +struct mlx5_cmd_set_flow_engine_mode { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t flow_engine; + cmdline_multi_string_t mode; +}; + +static int +parse_multi_token_flow_engine_mode(char *t_str, enum mlx5_flow_engine_mode *mode, + uint32_t *flag) +{ + uint64_t val; + char *token; + int ret; + + *flag = 0; + /* First token: mode string */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return -1; + + if (!strcmp(token, "active")) + *mode = MLX5_FLOW_ENGINE_MODE_ACTIVE; + else if (!strcmp(token, "standby")) + *mode = MLX5_FLOW_ENGINE_MODE_STANDBY; + else + return -1; + + /* Second token: flag */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + ret = parse_uint(&val, token); + if (ret != 0 || val > UINT32_MAX) + return -1; + + *flag = val; + return 0; +} + +static void +mlx5_cmd_set_flow_engine_mode_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct mlx5_cmd_set_flow_engine_mode *res = parsed_result; + enum mlx5_flow_engine_mode mode; + uint32_t flag; + int ret; + + ret = parse_multi_token_flow_engine_mode(res->mode, &mode, &flag); + + if (ret < 0) { + fprintf(stderr, "Bad input\n"); + return; + } + + ret = rte_pmd_mlx5_flow_engine_set_mode(mode, flag); + + if (ret < 0) + fprintf(stderr, "Fail to set flow_engine to %s mode with flag 0x%x, error %s\n", + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag, + strerror(-ret)); + else + TESTPMD_LOG(DEBUG, "Set %d ports flow_engine to %s mode with flag 0x%x\n", ret, + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag); +} + +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mlx5 = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mlx5, + "mlx5"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_set = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, set, + "set"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_flow_engine = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, flow_engine, + "flow_engine"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mode = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mode, + TOKEN_STRING_MULTI); + +cmdline_parse_inst_t mlx5_cmd_set_flow_engine_mode = { + .f = &mlx5_cmd_set_flow_engine_mode_parsed, + .data = NULL, + .help_str = "mlx5 set flow_engine <active|standby> [<flag>]", + .tokens = { + (void *)&mlx5_cmd_set_flow_engine_mode_mlx5, + (void *)&mlx5_cmd_set_flow_engine_mode_set, + (void *)&mlx5_cmd_set_flow_engine_mode_flow_engine, + (void *)&mlx5_cmd_set_flow_engine_mode_mode, + NULL, + } +}; + static struct testpmd_driver_commands mlx5_driver_cmds = { .commands = { { @@ -588,6 +707,11 @@ static struct testpmd_driver_commands mlx5_driver_cmds = { .help = "mlx5 port (port_id) ext_rxq unmap (sw_queue_id)\n" " Unmap external Rx queue ethdev index mapping\n\n", }, + { + .ctx = &mlx5_cmd_set_flow_engine_mode, + .help = "mlx5 set flow_engine (active|standby) [(flag)]\n" + " Set flow_engine to the specific mode with flag.\n\n" + }, { .ctx = NULL, }, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
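The parse_uint() helper added above accepts plain decimal or 0x-prefixed hexadecimal and rejects trailing garbage. A standalone harness, with the helper lifted verbatim from the patch purely for demonstration, shows the accepted and rejected forms:

#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

static int
parse_uint(uint64_t *value, const char *str)
{
	char *next = NULL;
	uint64_t n;

	errno = 0;
	/* Accept "0x" hexadecimal or plain decimal, reject anything trailing. */
	if (!strncasecmp(str, "0x", 2)) {
		str += 2;
		n = strtol(str, &next, 16);
	} else {
		n = strtol(str, &next, 10);
	}
	if (errno != 0 || str == next || *next != '\0')
		return -1;
	*value = n;
	return 0;
}

int
main(void)
{
	uint64_t v;

	assert(parse_uint(&v, "10") == 0 && v == 10);  /* decimal */
	assert(parse_uint(&v, "0x1") == 0 && v == 1);  /* hex: BIT(0) */
	assert(parse_uint(&v, "1x") == -1);            /* trailing junk */
	assert(parse_uint(&v, "") == -1);              /* empty string */
	printf("parse_uint behaves as expected\n");
	return 0;
}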
* [PATCH v5] net/mlx5: add test for live migration 2023-10-16 9:22 ` [PATCH v4] " Rongwei Liu @ 2023-10-25 9:36 ` Rongwei Liu 2023-10-25 9:41 ` Thomas Monjalon 0 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-10-25 9:36 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas This patch adds testpmd app a runtime function to test the live migration API. testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- doc/guides/nics/mlx5.rst | 15 ++++ drivers/net/mlx5/mlx5_testpmd.c | 124 ++++++++++++++++++++++++++++++++ 2 files changed, 139 insertions(+) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 9039b55c0b..412a967c68 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -2187,3 +2187,18 @@ where: * ``sw_queue_id``: queue index in range [64536, 65535]. This range is the highest 1000 numbers. * ``hw_queue_id``: queue index given by HW in queue creation. + +Set Flow Engine Mode +~~~~~~~~~~~~~~~~~~~~ + +Set the flow engine to active or standby mode with specific flags (bitmap style):: +See MLX5_FLOW_ENGINE_FLAG_* for the detailed flags definitions. + +.. code-block:: console + + testpmd> mlx5 set flow_engine <active|standby> [<flags>] + +This command is used for testing live migration and works for +software steering only. +Default FDB jump should be disabled if switchdev is enabled. +The mode will propagate to all the probed ports. diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c index 879ea2826e..c70a10b3af 100644 --- a/drivers/net/mlx5/mlx5_testpmd.c +++ b/drivers/net/mlx5/mlx5_testpmd.c @@ -25,6 +25,29 @@ static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS]; #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */ +#define PARSE_DELIMITER " \f\n\r\t\v" + +static int +parse_uint(uint64_t *value, const char *str) +{ + char *next = NULL; + uint64_t n; + + errno = 0; + /* Parse number string */ + if (!strncasecmp(str, "0x", 2)) { + str += 2; + n = strtol(str, &next, 16); + } else { + n = strtol(str, &next, 10); + } + if (errno != 0 || str == next || *next != '\0') + return -1; + + *value = n; + + return 0; +} /** * Disable the host shaper and re-arm available descriptor threshold event. @@ -561,6 +584,102 @@ cmdline_parse_inst_t mlx5_cmd_unmap_ext_rxq = { } }; +/* Set flow engine mode with flags command. 
*/ +struct mlx5_cmd_set_flow_engine_mode { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t flow_engine; + cmdline_multi_string_t mode; +}; + +static int +parse_multi_token_flow_engine_mode(char *t_str, enum mlx5_flow_engine_mode *mode, + uint32_t *flag) +{ + uint64_t val; + char *token; + int ret; + + *flag = 0; + /* First token: mode string */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return -1; + + if (!strcmp(token, "active")) + *mode = MLX5_FLOW_ENGINE_MODE_ACTIVE; + else if (!strcmp(token, "standby")) + *mode = MLX5_FLOW_ENGINE_MODE_STANDBY; + else + return -1; + + /* Second token: flag */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + ret = parse_uint(&val, token); + if (ret != 0 || val > UINT32_MAX) + return -1; + + *flag = val; + return 0; +} + +static void +mlx5_cmd_set_flow_engine_mode_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct mlx5_cmd_set_flow_engine_mode *res = parsed_result; + enum mlx5_flow_engine_mode mode; + uint32_t flag; + int ret; + + ret = parse_multi_token_flow_engine_mode(res->mode, &mode, &flag); + + if (ret < 0) { + fprintf(stderr, "Bad input\n"); + return; + } + + ret = rte_pmd_mlx5_flow_engine_set_mode(mode, flag); + + if (ret < 0) + fprintf(stderr, "Fail to set flow_engine to %s mode with flag 0x%x, error %s\n", + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag, + strerror(-ret)); + else + TESTPMD_LOG(DEBUG, "Set %d ports flow_engine to %s mode with flag 0x%x\n", ret, + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag); +} + +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mlx5 = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mlx5, + "mlx5"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_set = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, set, + "set"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_flow_engine = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, flow_engine, + "flow_engine"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mode = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mode, + TOKEN_STRING_MULTI); + +cmdline_parse_inst_t mlx5_cmd_set_flow_engine_mode = { + .f = &mlx5_cmd_set_flow_engine_mode_parsed, + .data = NULL, + .help_str = "mlx5 set flow_engine <active|standby> [<flag>]", + .tokens = { + (void *)&mlx5_cmd_set_flow_engine_mode_mlx5, + (void *)&mlx5_cmd_set_flow_engine_mode_set, + (void *)&mlx5_cmd_set_flow_engine_mode_flow_engine, + (void *)&mlx5_cmd_set_flow_engine_mode_mode, + NULL, + } +}; + static struct testpmd_driver_commands mlx5_driver_cmds = { .commands = { { @@ -588,6 +707,11 @@ static struct testpmd_driver_commands mlx5_driver_cmds = { .help = "mlx5 port (port_id) ext_rxq unmap (sw_queue_id)\n" " Unmap external Rx queue ethdev index mapping\n\n", }, + { + .ctx = &mlx5_cmd_set_flow_engine_mode, + .help = "mlx5 set flow_engine (active|standby) [(flag)]\n" + " Set flow_engine to the specific mode with flag.\n\n" + }, { .ctx = NULL, }, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [PATCH v5] net/mlx5: add test for live migration 2023-10-25 9:36 ` [PATCH v5] " Rongwei Liu @ 2023-10-25 9:41 ` Thomas Monjalon 2023-10-25 9:45 ` [PATCH v6] " Rongwei Liu ` (2 more replies) 0 siblings, 3 replies; 64+ messages in thread From: Thomas Monjalon @ 2023-10-25 9:41 UTC (permalink / raw) To: Rongwei Liu; +Cc: dev, matan, viacheslavo, orika, suanmingm 25/10/2023 11:36, Rongwei Liu: > +Set Flow Engine Mode > +~~~~~~~~~~~~~~~~~~~~ > + > +Set the flow engine to active or standby mode with specific flags (bitmap style):: This sentence should end with a dot. > +See MLX5_FLOW_ENGINE_FLAG_* for the detailed flags definitions. MLX5_FLOW_ENGINE_FLAG_* should be between backquotes: ``MLX5_FLOW_ENGINE_FLAG_*`` No need "s" to "flags". You can also remove "detailed": "flag definitions". > + > +.. code-block:: console > + > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > + > +This command is used for testing live migration and works for > +software steering only. > +Default FDB jump should be disabled if switchdev is enabled. > +The mode will propagate to all the probed ports. ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v6] net/mlx5: add test for live migration 2023-10-25 9:41 ` Thomas Monjalon @ 2023-10-25 9:45 ` Rongwei Liu 2023-10-25 9:48 ` [PATCH v5] " Rongwei Liu 2023-10-25 9:50 ` [PATCH v7] " Rongwei Liu 2 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-25 9:45 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas This patch adds testpmd app a runtime function to test the live migration API. testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- doc/guides/nics/mlx5.rst | 15 ++++ drivers/net/mlx5/mlx5_testpmd.c | 124 ++++++++++++++++++++++++++++++++ 2 files changed, 139 insertions(+) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 9039b55c0b..aca51f0928 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -2187,3 +2187,18 @@ where: * ``sw_queue_id``: queue index in range [64536, 65535]. This range is the highest 1000 numbers. * ``hw_queue_id``: queue index given by HW in queue creation. + +Set Flow Engine Mode +~~~~~~~~~~~~~~~~~~~~ + +Set the flow engine to active or standby mode with specific flags (bitmap style):: +See MLX5_FLOW_ENGINE_FLAG_* for the flag definitions. + +.. code-block:: console + + testpmd> mlx5 set flow_engine <active|standby> [<flags>] + +This command is used for testing live migration and works for +software steering only. +Default FDB jump should be disabled if switchdev is enabled. +The mode will propagate to all the probed ports. diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c index 879ea2826e..c70a10b3af 100644 --- a/drivers/net/mlx5/mlx5_testpmd.c +++ b/drivers/net/mlx5/mlx5_testpmd.c @@ -25,6 +25,29 @@ static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS]; #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */ +#define PARSE_DELIMITER " \f\n\r\t\v" + +static int +parse_uint(uint64_t *value, const char *str) +{ + char *next = NULL; + uint64_t n; + + errno = 0; + /* Parse number string */ + if (!strncasecmp(str, "0x", 2)) { + str += 2; + n = strtol(str, &next, 16); + } else { + n = strtol(str, &next, 10); + } + if (errno != 0 || str == next || *next != '\0') + return -1; + + *value = n; + + return 0; +} /** * Disable the host shaper and re-arm available descriptor threshold event. @@ -561,6 +584,102 @@ cmdline_parse_inst_t mlx5_cmd_unmap_ext_rxq = { } }; +/* Set flow engine mode with flags command. 
*/ +struct mlx5_cmd_set_flow_engine_mode { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t flow_engine; + cmdline_multi_string_t mode; +}; + +static int +parse_multi_token_flow_engine_mode(char *t_str, enum mlx5_flow_engine_mode *mode, + uint32_t *flag) +{ + uint64_t val; + char *token; + int ret; + + *flag = 0; + /* First token: mode string */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return -1; + + if (!strcmp(token, "active")) + *mode = MLX5_FLOW_ENGINE_MODE_ACTIVE; + else if (!strcmp(token, "standby")) + *mode = MLX5_FLOW_ENGINE_MODE_STANDBY; + else + return -1; + + /* Second token: flag */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + ret = parse_uint(&val, token); + if (ret != 0 || val > UINT32_MAX) + return -1; + + *flag = val; + return 0; +} + +static void +mlx5_cmd_set_flow_engine_mode_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct mlx5_cmd_set_flow_engine_mode *res = parsed_result; + enum mlx5_flow_engine_mode mode; + uint32_t flag; + int ret; + + ret = parse_multi_token_flow_engine_mode(res->mode, &mode, &flag); + + if (ret < 0) { + fprintf(stderr, "Bad input\n"); + return; + } + + ret = rte_pmd_mlx5_flow_engine_set_mode(mode, flag); + + if (ret < 0) + fprintf(stderr, "Fail to set flow_engine to %s mode with flag 0x%x, error %s\n", + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag, + strerror(-ret)); + else + TESTPMD_LOG(DEBUG, "Set %d ports flow_engine to %s mode with flag 0x%x\n", ret, + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag); +} + +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mlx5 = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mlx5, + "mlx5"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_set = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, set, + "set"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_flow_engine = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, flow_engine, + "flow_engine"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mode = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mode, + TOKEN_STRING_MULTI); + +cmdline_parse_inst_t mlx5_cmd_set_flow_engine_mode = { + .f = &mlx5_cmd_set_flow_engine_mode_parsed, + .data = NULL, + .help_str = "mlx5 set flow_engine <active|standby> [<flag>]", + .tokens = { + (void *)&mlx5_cmd_set_flow_engine_mode_mlx5, + (void *)&mlx5_cmd_set_flow_engine_mode_set, + (void *)&mlx5_cmd_set_flow_engine_mode_flow_engine, + (void *)&mlx5_cmd_set_flow_engine_mode_mode, + NULL, + } +}; + static struct testpmd_driver_commands mlx5_driver_cmds = { .commands = { { @@ -588,6 +707,11 @@ static struct testpmd_driver_commands mlx5_driver_cmds = { .help = "mlx5 port (port_id) ext_rxq unmap (sw_queue_id)\n" " Unmap external Rx queue ethdev index mapping\n\n", }, + { + .ctx = &mlx5_cmd_set_flow_engine_mode, + .help = "mlx5 set flow_engine (active|standby) [(flag)]\n" + " Set flow_engine to the specific mode with flag.\n\n" + }, { .ctx = NULL, }, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v5] net/mlx5: add test for live migration 2023-10-25 9:41 ` Thomas Monjalon 2023-10-25 9:45 ` [PATCH v6] " Rongwei Liu @ 2023-10-25 9:48 ` Rongwei Liu 2023-10-25 9:50 ` [PATCH v7] " Rongwei Liu 2 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-25 9:48 UTC (permalink / raw) To: NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou BR Rongwei > -----Original Message----- > From: Thomas Monjalon <thomas@monjalon.net> > Sent: Wednesday, October 25, 2023 17:42 > To: Rongwei Liu <rongweil@nvidia.com> > Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com> > Subject: Re: [PATCH v5] net/mlx5: add test for live migration > > External email: Use caution opening links or attachments > > > 25/10/2023 11:36, Rongwei Liu: > > +Set Flow Engine Mode > > +~~~~~~~~~~~~~~~~~~~~ > > + > > +Set the flow engine to active or standby mode with specific flags (bitmap > style):: > > This sentence should end with a dot. > Sure. > > +See MLX5_FLOW_ENGINE_FLAG_* for the detailed flags definitions. > > MLX5_FLOW_ENGINE_FLAG_* should be between backquotes: > ``MLX5_FLOW_ENGINE_FLAG_*`` > Sure. > No need "s" to "flags". > You can also remove "detailed": "flag definitions". > Sure. > > + > > +.. code-block:: console > > + > > + testpmd> mlx5 set flow_engine <active|standby> [<flags>] > > + > > +This command is used for testing live migration and works for > > +software steering only. > > +Default FDB jump should be disabled if switchdev is enabled. > > +The mode will propagate to all the probed ports. > > ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v7] net/mlx5: add test for live migration 2023-10-25 9:41 ` Thomas Monjalon 2023-10-25 9:45 ` [PATCH v6] " Rongwei Liu 2023-10-25 9:48 ` [PATCH v5] " Rongwei Liu @ 2023-10-25 9:50 ` Rongwei Liu 2023-10-25 13:10 ` Thomas Monjalon 2023-10-26 8:15 ` Raslan Darawsheh 2 siblings, 2 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-25 9:50 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas This patch adds testpmd app a runtime function to test the live migration API. testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- doc/guides/nics/mlx5.rst | 15 ++++ drivers/net/mlx5/mlx5_testpmd.c | 124 ++++++++++++++++++++++++++++++++ 2 files changed, 139 insertions(+) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 9039b55c0b..8bfe1e6efd 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -2187,3 +2187,18 @@ where: * ``sw_queue_id``: queue index in range [64536, 65535]. This range is the highest 1000 numbers. * ``hw_queue_id``: queue index given by HW in queue creation. + +Set Flow Engine Mode +~~~~~~~~~~~~~~~~~~~~ + +Set the flow engine to active or standby mode with specific flags (bitmap style). +See ``MLX5_FLOW_ENGINE_FLAG_*`` for the flag definitions. + +.. code-block:: console + + testpmd> mlx5 set flow_engine <active|standby> [<flags>] + +This command is used for testing live migration and works for +software steering only. +Default FDB jump should be disabled if switchdev is enabled. +The mode will propagate to all the probed ports. diff --git a/drivers/net/mlx5/mlx5_testpmd.c b/drivers/net/mlx5/mlx5_testpmd.c index 879ea2826e..c70a10b3af 100644 --- a/drivers/net/mlx5/mlx5_testpmd.c +++ b/drivers/net/mlx5/mlx5_testpmd.c @@ -25,6 +25,29 @@ static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS]; #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */ +#define PARSE_DELIMITER " \f\n\r\t\v" + +static int +parse_uint(uint64_t *value, const char *str) +{ + char *next = NULL; + uint64_t n; + + errno = 0; + /* Parse number string */ + if (!strncasecmp(str, "0x", 2)) { + str += 2; + n = strtol(str, &next, 16); + } else { + n = strtol(str, &next, 10); + } + if (errno != 0 || str == next || *next != '\0') + return -1; + + *value = n; + + return 0; +} /** * Disable the host shaper and re-arm available descriptor threshold event. @@ -561,6 +584,102 @@ cmdline_parse_inst_t mlx5_cmd_unmap_ext_rxq = { } }; +/* Set flow engine mode with flags command. 
*/ +struct mlx5_cmd_set_flow_engine_mode { + cmdline_fixed_string_t mlx5; + cmdline_fixed_string_t set; + cmdline_fixed_string_t flow_engine; + cmdline_multi_string_t mode; +}; + +static int +parse_multi_token_flow_engine_mode(char *t_str, enum mlx5_flow_engine_mode *mode, + uint32_t *flag) +{ + uint64_t val; + char *token; + int ret; + + *flag = 0; + /* First token: mode string */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return -1; + + if (!strcmp(token, "active")) + *mode = MLX5_FLOW_ENGINE_MODE_ACTIVE; + else if (!strcmp(token, "standby")) + *mode = MLX5_FLOW_ENGINE_MODE_STANDBY; + else + return -1; + + /* Second token: flag */ + token = strtok_r(t_str, PARSE_DELIMITER, &t_str); + if (token == NULL) + return 0; + + ret = parse_uint(&val, token); + if (ret != 0 || val > UINT32_MAX) + return -1; + + *flag = val; + return 0; +} + +static void +mlx5_cmd_set_flow_engine_mode_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct mlx5_cmd_set_flow_engine_mode *res = parsed_result; + enum mlx5_flow_engine_mode mode; + uint32_t flag; + int ret; + + ret = parse_multi_token_flow_engine_mode(res->mode, &mode, &flag); + + if (ret < 0) { + fprintf(stderr, "Bad input\n"); + return; + } + + ret = rte_pmd_mlx5_flow_engine_set_mode(mode, flag); + + if (ret < 0) + fprintf(stderr, "Fail to set flow_engine to %s mode with flag 0x%x, error %s\n", + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag, + strerror(-ret)); + else + TESTPMD_LOG(DEBUG, "Set %d ports flow_engine to %s mode with flag 0x%x\n", ret, + mode == MLX5_FLOW_ENGINE_MODE_ACTIVE ? "active" : "standby", flag); +} + +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mlx5 = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mlx5, + "mlx5"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_set = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, set, + "set"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_flow_engine = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, flow_engine, + "flow_engine"); +cmdline_parse_token_string_t mlx5_cmd_set_flow_engine_mode_mode = + TOKEN_STRING_INITIALIZER(struct mlx5_cmd_set_flow_engine_mode, mode, + TOKEN_STRING_MULTI); + +cmdline_parse_inst_t mlx5_cmd_set_flow_engine_mode = { + .f = &mlx5_cmd_set_flow_engine_mode_parsed, + .data = NULL, + .help_str = "mlx5 set flow_engine <active|standby> [<flag>]", + .tokens = { + (void *)&mlx5_cmd_set_flow_engine_mode_mlx5, + (void *)&mlx5_cmd_set_flow_engine_mode_set, + (void *)&mlx5_cmd_set_flow_engine_mode_flow_engine, + (void *)&mlx5_cmd_set_flow_engine_mode_mode, + NULL, + } +}; + static struct testpmd_driver_commands mlx5_driver_cmds = { .commands = { { @@ -588,6 +707,11 @@ static struct testpmd_driver_commands mlx5_driver_cmds = { .help = "mlx5 port (port_id) ext_rxq unmap (sw_queue_id)\n" " Unmap external Rx queue ethdev index mapping\n\n", }, + { + .ctx = &mlx5_cmd_set_flow_engine_mode, + .help = "mlx5 set flow_engine (active|standby) [(flag)]\n" + " Set flow_engine to the specific mode with flag.\n\n" + }, { .ctx = NULL, }, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
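With the v7 parser, both the happy path and the validation in parse_multi_token_flow_engine_mode() can be exercised; a hypothetical session (log lines approximate, assuming two probed ports and debug logging enabled):

testpmd> mlx5 set flow_engine standby 0x1
Set 2 ports flow_engine to standby mode with flag 0x1
testpmd> mlx5 set flow_engine maintenance
Bad input
testpmd> mlx5 set flow_engine standby 0x1ffffffff
Bad input

The last command is rejected because the flag, although parsed into a 64-bit value, must fit in 32 bits (val > UINT32_MAX fails the check).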
* Re: [PATCH v7] net/mlx5: add test for live migration 2023-10-25 9:50 ` [PATCH v7] " Rongwei Liu @ 2023-10-25 13:10 ` Thomas Monjalon 2023-10-26 8:15 ` Raslan Darawsheh 1 sibling, 0 replies; 64+ messages in thread From: Thomas Monjalon @ 2023-10-25 13:10 UTC (permalink / raw) To: Rongwei Liu; +Cc: dev, matan, viacheslavo, orika, suanmingm 25/10/2023 11:50, Rongwei Liu: > This patch adds testpmd app a runtime function to test the live > migration API. > > testpmd> mlx5 set flow_engine <active|standby> [<flag>] > Flag is optional. > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> > Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Thomas Monjalon <thomas@monjalon.net> ^ permalink raw reply [flat|nested] 64+ messages in thread
* RE: [PATCH v7] net/mlx5: add test for live migration 2023-10-25 9:50 ` [PATCH v7] " Rongwei Liu 2023-10-25 13:10 ` Thomas Monjalon @ 2023-10-26 8:15 ` Raslan Darawsheh 1 sibling, 0 replies; 64+ messages in thread From: Raslan Darawsheh @ 2023-10-26 8:15 UTC (permalink / raw) To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, NBU-Contact-Thomas Monjalon (EXTERNAL) Hi, > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Wednesday, October 25, 2023 12:50 PM > To: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net> > Subject: [PATCH v7] net/mlx5: add test for live migration > > This patch adds testpmd app a runtime function to test the live migration API. > > testpmd> mlx5 set flow_engine <active|standby> [<flag>] Flag is optional. > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> > Acked-by: Ori Kam <orika@nvidia.com> Patch applied to next-net-mlx, Kindest regards, Raslan Darawsheh ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-05-24 7:06 ` Ori Kam 2023-04-17 9:25 ` [PATCH v1 3/8] net/mlx5/hws: add no reparse support Rongwei Liu ` (5 subsequent siblings) 7 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas; +Cc: Aman Singh, Yuying Zhang Add command lines to generate IPv6 routing extension push and remove patterns and follow the raw_encap/decap style. Add the new actions to the action template parsing. Generating the action patterns 1. IPv6 routing extension push set ipv6_ext_push 1 ipv6_ext type is 43 / ipv6_routing_ext ext_type is 4 ext_next_hdr is 17 ext_seg_left is 2 / end_set 2. IPv6 routing extension remove set ipv6_ext_remove 1 ipv6_ext type is 43 / end_set Specifying the action in the template 1. actions_template_id 1 template ipv6_ext_push index 1 2. actions_template_id 1 template ipv6_ext_remove index 1 Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- app/test-pmd/cmdline_flow.c | 443 +++++++++++++++++++++++++++++++++++- 1 file changed, 442 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..ea4cebce1c 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -74,6 +74,9 @@ enum index { SET_RAW_INDEX, SET_SAMPLE_ACTIONS, SET_SAMPLE_INDEX, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH, + SET_IPV6_EXT_INDEX, /* Top-level command. */ FLOW, @@ -496,6 +499,8 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_IPV6_PUSH_REMOVE_EXT, + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, /* Validate/create actions. */ ACTIONS, @@ -665,6 +670,12 @@ enum index { ACTION_QUOTA_QU_LIMIT, ACTION_QUOTA_QU_UPDATE_OP, ACTION_QUOTA_QU_UPDATE_OP_NAME, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_IPV6_EXT_REMOVE_INDEX_VALUE, + ACTION_IPV6_EXT_PUSH, + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_IPV6_EXT_PUSH_INDEX_VALUE, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -731,6 +742,42 @@ struct action_raw_decap_data { uint16_t idx; }; +/** Maximum data size in struct rte_flow_action_ipv6_ext_push. */ +#define ACTION_IPV6_EXT_PUSH_MAX_DATA 512 +#define IPV6_EXT_PUSH_CONFS_MAX_NUM 8 + +/** Storage for struct rte_flow_action_ipv6_ext_push. */ +struct ipv6_ext_push_conf { + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + size_t size; + uint8_t type; +}; + +struct ipv6_ext_push_conf ipv6_ext_push_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_push including external data. */ +struct action_ipv6_ext_push_data { + struct rte_flow_action_ipv6_ext_push conf; + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + uint8_t type; + uint16_t idx; +}; + +/** Storage for struct rte_flow_action_ipv6_ext_remove. */ +struct ipv6_ext_remove_conf { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; +}; + +struct ipv6_ext_remove_conf ipv6_ext_remove_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_remove including external data. 
*/ +struct action_ipv6_ext_remove_data { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; + uint16_t idx; +}; + struct vxlan_encap_conf vxlan_encap_conf = { .select_ipv4 = 1, .select_vlan = 0, @@ -2022,6 +2069,8 @@ static const enum index next_action[] = { ACTION_SEND_TO_KERNEL, ACTION_QUOTA_CREATE, ACTION_QUOTA_QU, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_PUSH, ZERO, }; @@ -2230,6 +2279,18 @@ static const enum index action_raw_decap[] = { ZERO, }; +static const enum index action_ipv6_ext_remove[] = { + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_NEXT, + ZERO, +}; + +static const enum index action_ipv6_ext_push[] = { + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_NEXT, + ZERO, +}; + static const enum index action_set_tag[] = { ACTION_SET_TAG_DATA, ACTION_SET_TAG_INDEX, @@ -2293,6 +2354,22 @@ static const enum index next_action_sample[] = { ZERO, }; +static const enum index item_ipv6_push_ext[] = { + ITEM_IPV6_PUSH_REMOVE_EXT, + ZERO, +}; + +static const enum index item_ipv6_push_ext_type[] = { + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, + ZERO, +}; + +static const enum index item_ipv6_push_ext_header[] = { + ITEM_IPV6_ROUTING_EXT, + ITEM_NEXT, + ZERO, +}; + static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -2334,6 +2411,9 @@ static int parse_set_raw_encap_decap(struct context *, const struct token *, static int parse_set_sample_action(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_set_ipv6_ext_action(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2411,6 +2491,22 @@ static int parse_vc_action_raw_encap_index(struct context *, static int parse_vc_action_raw_decap_index(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_remove_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -2596,6 +2692,8 @@ static int comp_set_raw_index(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_sample_index(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, @@ -6472,6 +6570,48 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), .call = parse_vc, }, + [ACTION_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "IPv6 extension type, 
defined by set ipv6_ext_remove", + .priv = PRIV_ACTION(IPV6_EXT_REMOVE, + sizeof(struct action_ipv6_ext_remove_data)), + .next = NEXT(action_ipv6_ext_remove), + .call = parse_vc_action_ipv6_ext_remove, + }, + [ACTION_IPV6_EXT_REMOVE_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_remove", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_REMOVE_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_REMOVE_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_remove_index, + .comp = comp_set_ipv6_ext_index, + }, + [ACTION_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "IPv6 extension data, defined by set ipv6_ext_push", + .priv = PRIV_ACTION(IPV6_EXT_PUSH, + sizeof(struct action_ipv6_ext_push_data)), + .next = NEXT(action_ipv6_ext_push), + .call = parse_vc_action_ipv6_ext_push, + }, + [ACTION_IPV6_EXT_PUSH_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_push", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_PUSH_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_PUSH_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_push_index, + .comp = comp_set_ipv6_ext_index, + }, /* Top level command. */ [SET] = { .name = "set", @@ -6481,7 +6621,9 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (SET_RAW_ENCAP, SET_RAW_DECAP, - SET_SAMPLE_ACTIONS)), + SET_SAMPLE_ACTIONS, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH)), .call = parse_set_init, }, /* Sub-level commands. */ @@ -6529,6 +6671,49 @@ static const struct token token_list[] = { 0, RAW_SAMPLE_CONFS_MAX_NUM - 1)), .call = parse_set_sample_action, }, + [SET_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_INDEX] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "index of ipv6 extension push/remove actions", + .next = NEXT(item_ipv6_push_ext), + .call = parse_port, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT] = { + .name = "ipv6_ext", + .help = "set IPv6 extension header", + .priv = PRIV_ITEM(IPV6_EXT, + sizeof(struct rte_flow_item_ipv6_ext)), + .next = NEXT(item_ipv6_push_ext_type), + .call = parse_vc, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT_TYPE] = { + .name = "type", + .help = "set IPv6 extension type", + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_ext, + next_hdr)), + .next = NEXT(item_ipv6_push_ext_header, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + }, [ACTION_SET_TAG] = { .name = "set_tag", .help = "set tag", @@ -8843,6 +9028,140 @@ parse_vc_action_raw_decap(struct context *ctx, const struct token *token, return ret; } +static int +parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct 
action_ipv6_ext_remove_data *ipv6_ext_remove_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. */ + ipv6_ext_remove_data = ctx->object; + ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[0].type; + action->conf = &ipv6_ext_remove_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_remove_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_remove_data *action_ipv6_ext_remove_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_remove_data, idx), + sizeof(((struct action_ipv6_ext_remove_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_remove_data = ctx->object; + idx = action_ipv6_ext_remove_data->idx; + action_ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[idx].type; + action->conf = &action_ipv6_ext_remove_data->conf; + return len; +} + +static int +parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_ipv6_ext_push_data *ipv6_ext_push_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + ipv6_ext_push_data = ctx->object; + ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[0].type; + ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[0].data; + ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[0].size; + action->conf = &ipv6_ext_push_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_push_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_push_data *action_ipv6_ext_push_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_push_data, idx), + sizeof(((struct action_ipv6_ext_push_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_push_data = ctx->object; + idx = action_ipv6_ext_push_data->idx; + action_ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[idx].type; + action_ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[idx].size; + action_ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[idx].data; + action->conf = &action_ipv6_ext_push_data->conf; + return len; +} + static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -10532,6 +10851,35 @@ parse_set_sample_action(struct context *ctx, const struct token *token, return len; } +/** Parse set command, initialize output buffer for subsequent tokens. */ +static int +parse_set_ipv6_ext_action(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Make sure buffer is large enough. */ + if (size < sizeof(*out)) + return -1; + ctx->objdata = 0; + ctx->objmask = NULL; + ctx->object = out; + if (!out->command) + return -1; + out->command = ctx->curr; + /* For ipv6_ext_push/remove we need the pattern. */ + out->args.vc.pattern = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; +} + /** * Parse set raw_encap/raw_decap command, * initialize output buffer for subsequent tokens. @@ -10961,6 +11309,24 @@ comp_set_raw_index(struct context *ctx, const struct token *token, return nb; } +/** Complete index number for set ipv6_ext_push/ipv6_ext_remove commands. */ +static int +comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + uint16_t idx = 0; + uint16_t nb = 0; + + RTE_SET_USED(ctx); + RTE_SET_USED(token); + for (idx = 0; idx < IPV6_EXT_PUSH_CONFS_MAX_NUM; ++idx) { + if (buf && idx == ent) + return snprintf(buf, size, "%u", idx); + ++nb; + } + return nb; +} + /** Complete index number for set raw_encap/raw_decap commands. */ static int comp_set_sample_index(struct context *ctx, const struct token *token, @@ -11855,6 +12221,78 @@ flow_item_default_mask(const struct rte_flow_item *item) return mask; } +/** Dispatch parsed buffer to function calls. 
*/ +static void +cmd_set_ipv6_ext_parsed(const struct buffer *in) +{ + uint32_t n = in->args.vc.pattern_n; + int i = 0; + struct rte_flow_item *item = NULL; + size_t size = 0; + uint8_t *data = NULL; + uint8_t *type = NULL; + size_t *total_size = NULL; + uint16_t idx = in->port; /* We borrow port field as index */ + struct rte_flow_item_ipv6_routing_ext *ext; + const struct rte_flow_item_ipv6_ext *ipv6_ext; + + RTE_ASSERT(in->command == SET_IPV6_EXT_PUSH || + in->command == SET_IPV6_EXT_REMOVE); + + if (in->command == SET_IPV6_EXT_REMOVE) { + if (n != 1 || in->args.vc.pattern->type != + RTE_FLOW_ITEM_TYPE_IPV6_EXT) { + fprintf(stderr, "Error - Not supported item\n"); + return; + } + type = (uint8_t *)&ipv6_ext_remove_confs[idx].type; + item = in->args.vc.pattern; + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + return; + } + + total_size = &ipv6_ext_push_confs[idx].size; + data = (uint8_t *)&ipv6_ext_push_confs[idx].data; + type = (uint8_t *)&ipv6_ext_push_confs[idx].type; + + *total_size = 0; + memset(data, 0x00, ACTION_RAW_ENCAP_MAX_DATA); + for (i = n - 1 ; i >= 0; --i) { + item = in->args.vc.pattern + i; + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV6_EXT: + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + break; + case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT: + ext = (struct rte_flow_item_ipv6_routing_ext *)(uintptr_t)item->spec; + if (!ext->hdr.hdr_len) { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.segments_left << 4); + ext->hdr.hdr_len = ext->hdr.segments_left << 1; + /* Indicate no TLV once SRH. */ + if (ext->hdr.type == 4) + ext->hdr.last_entry = ext->hdr.segments_left - 1; + } else { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.hdr_len << 3); + } + *total_size += size; + memcpy(data, ext, size); + break; + default: + fprintf(stderr, "Error - Not supported item\n"); + goto error; + } + } + RTE_ASSERT((*total_size) <= ACTION_IPV6_EXT_PUSH_MAX_DATA); + return; +error: + *total_size = 0; + memset(data, 0x00, ACTION_IPV6_EXT_PUSH_MAX_DATA); +} + /** Dispatch parsed buffer to function calls. */ static void cmd_set_raw_parsed_sample(const struct buffer *in) @@ -11988,6 +12426,9 @@ cmd_set_raw_parsed(const struct buffer *in) if (in->command == SET_SAMPLE_ACTIONS) return cmd_set_raw_parsed_sample(in); + else if (in->command == SET_IPV6_EXT_PUSH || + in->command == SET_IPV6_EXT_REMOVE) + return cmd_set_ipv6_ext_parsed(in); RTE_ASSERT(in->command == SET_RAW_ENCAP || in->command == SET_RAW_DECAP); if (in->command == SET_RAW_ENCAP) { -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
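Taken together with the examples in the commit message, a hypothetical testpmd session first fills the indexed buffers and then references them from an actions template (only the ipv6_ext_push/ipv6_ext_remove tokens come from this patch; the surrounding flow actions_template syntax is assumed):

testpmd> set ipv6_ext_push 1 ipv6_ext type is 43 / ipv6_routing_ext ext_type is 4 ext_next_hdr is 17 ext_seg_left is 2 / end_set
testpmd> set ipv6_ext_remove 1 ipv6_ext type is 43 / end_set
testpmd> flow actions_template 0 create actions_template_id 1 template ipv6_ext_push index 1 / end mask ipv6_ext_push index 1 / end

For the push buffer above, cmd_set_ipv6_ext_parsed() derives the template size from segments_left when no hdr_len is given: sizeof(struct rte_ipv6_routing_ext) + (2 << 4) = 8 + 32 = 40 bytes, with hdr_len set to 2 << 1 = 4 (8-byte units beyond the first 8 bytes) and, since ext_type is 4 (SRH), last_entry set to segments_left - 1 = 1.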
* RE: [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli 2023-04-17 9:25 ` [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli Rongwei Liu @ 2023-05-24 7:06 ` Ori Kam 0 siblings, 0 replies; 64+ messages in thread From: Ori Kam @ 2023-05-24 7:06 UTC (permalink / raw) To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL) Cc: Aman Singh, Yuying Zhang Hi Rongwei, > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Monday, April 17, 2023 12:26 PM > > Add command lines to generate IPv6 routing extension push and > remove patterns and follow the raw_encap/decap style. > > Add the new actions to the action template parsing. > > Generating the action patterns > 1. IPv6 routing extension push > set ipv6_ext_push 1 ipv6_ext type is 43 / > ipv6_routing_ext ext_type is 4 > ext_next_hdr is 17 ext_seg_left is 2 / end_set > 2. IPv6 routing extension remove > set ipv6_ext_remove 1 ipv6_ext type is 43 / end_set > > Specifying the action in the template > 1. actions_template_id 1 template ipv6_ext_push index 1 > 2. actions_template_id 1 template ipv6_ext_remove index 1 > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com> > --- Acked-by: Ori Kam <orika@nvidia.com> Best, Ori ^ permalink raw reply [flat|nested] 64+ messages in thread
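For an application that builds the action without going through the testpmd buffers, the conf fields the CLI fills (type, size, data) can be set directly. A sketch assuming only what the parser code above shows about struct rte_flow_action_ipv6_ext_push; the helper and its signature are illustrative, not part of the series:

#include <stddef.h>
#include <stdint.h>
#include <rte_flow.h>

/*
 * Wrap a caller-supplied routing-header template into an
 * IPV6_EXT_PUSH action. Field usage mirrors the testpmd parser;
 * everything else here is an illustrative assumption.
 */
static struct rte_flow_action
make_ipv6_ext_push_action(struct rte_flow_action_ipv6_ext_push *conf,
			  uint8_t *tmpl, size_t tmpl_len)
{
	conf->type = 43;       /* IPv6 routing header, as in the CLI example */
	conf->data = tmpl;     /* pre-built extension header template */
	conf->size = tmpl_len;
	return (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
		.conf = conf,
	};
}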
* [PATCH v1 3/8] net/mlx5/hws: add no reparse support 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 4/8] net/mlx5: sample the srv6 last segment Rongwei Liu ` (4 subsequent siblings) 7 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas Allocate two groups of RTC to control hardware reparsing when a flow is updated. 1. Group 1 always reparses the traffic after the packet is modified by any hardware module. This is the default behavior, the same as before. 2. Group 2 doesn't perform any packet reparsing. This helps complex flow rules avoid redundant reparsing. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_cmd.c | 5 +- drivers/net/mlx5/hws/mlx5dr_cmd.h | 1 + drivers/net/mlx5/hws/mlx5dr_debug.c | 8 +-- drivers/net/mlx5/hws/mlx5dr_matcher.c | 80 +++++++++++++++++---------- drivers/net/mlx5/hws/mlx5dr_matcher.h | 12 ++-- drivers/net/mlx5/hws/mlx5dr_rule.c | 65 ++++++++++++++++------ 6 files changed, 117 insertions(+), 54 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index 0adcedd9c9..42bf1980db 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -283,7 +283,10 @@ mlx5dr_cmd_rtc_create(struct ibv_context *ctx, MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); - MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + if (rtc_attr->is_reparse) + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + else + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_NEVER); devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); if (!devx_obj->obj) { diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 3f40c085be..b225171d4c 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -51,6 +51,7 @@ struct mlx5dr_cmd_rtc_create_attr { uint8_t match_definer_1; bool is_frst_jumbo; bool is_scnd_range; + uint8_t is_reparse; }; struct mlx5dr_cmd_alias_obj_create_attr { diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 6b32ac4ee6..b8049a173d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -224,9 +224,9 @@ static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) } ret = fprintf(f, ",%d,%d,%d,%d", - matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + matcher->match_ste.rtc_0_reparse ? matcher->match_ste.rtc_0_reparse->id : 0, ste_0 ? (int)ste_0->id : -1, - matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + matcher->match_ste.rtc_1_reparse ? matcher->match_ste.rtc_1_reparse->id : 0, ste_1 ? (int)ste_1->id : -1); if (ret < 0) goto out_err; @@ -243,9 +243,9 @@ static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) } ret = fprintf(f, ",%d,%d,%d,%d,%d\n", - matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + matcher->action_ste.rtc_0_reparse ? matcher->action_ste.rtc_0_reparse->id : 0, ste_0 ? (int)ste_0->id : -1, - matcher->action_ste.rtc_1 ? 
matcher->action_ste.rtc_1->id : 0, + matcher->action_ste.rtc_1_reparse ? matcher->action_ste.rtc_1_reparse->id : 0, ste_1 ? (int)ste_1->id : -1, is_shared && !is_root ? matcher->match_ste.aliased_rtc_0->id : 0); diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 1fe7ec1bc3..652d50f73a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -101,7 +101,7 @@ static int mlx5dr_matcher_shared_create_alias_rtc(struct mlx5dr_matcher *matcher ctx->ibv_ctx, ctx->local_ibv_ctx, ctx->caps->shared_vhca_id, - matcher->match_ste.rtc_0->id, + matcher->match_ste.rtc_0_reparse->id, MLX5_GENERAL_OBJ_TYPE_RTC, &matcher->match_ste.aliased_rtc_0); if (ret) { @@ -156,7 +156,7 @@ static uint32_t mlx5dr_matcher_connect_get_rtc0(struct mlx5dr_matcher *matcher) { if (!matcher->match_ste.aliased_rtc_0) - return matcher->match_ste.rtc_0->id; + return matcher->match_ste.rtc_0_reparse->id; else return matcher->match_ste.aliased_rtc_0->id; } @@ -233,10 +233,10 @@ static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) /* Connect to next */ if (next) { - if (next->match_ste.rtc_0) - ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; - if (next->match_ste.rtc_1) - ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + if (next->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = next->match_ste.rtc_0_reparse->id; + if (next->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = next->match_ste.rtc_1_reparse->id; ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); if (ret) { @@ -248,10 +248,10 @@ static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) /* Connect to previous */ ft = prev ? prev->end_ft : tbl->ft; - if (matcher->match_ste.rtc_0) - ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; - if (matcher->match_ste.rtc_1) - ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + if (matcher->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0_reparse->id; + if (matcher->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1_reparse->id; ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); if (ret) { @@ -296,10 +296,10 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) if (next) { /* Connect previous end FT to next RTC if exists */ - if (next->match_ste.rtc_0) - ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; - if (next->match_ste.rtc_1) - ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + if (next->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = next->match_ste.rtc_0_reparse->id; + if (next->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = next->match_ste.rtc_1_reparse->id; } else { /* Matcher is last, point prev end FT to default miss */ mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, @@ -470,10 +470,11 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, struct mlx5dr_pool_chunk *ste; int ret; + rtc_attr.is_reparse = true; switch (rtc_type) { case DR_MATCHER_RTC_TYPE_MATCH: - rtc_0 = &matcher->match_ste.rtc_0; - rtc_1 = &matcher->match_ste.rtc_1; + rtc_0 = &matcher->match_ste.rtc_0_reparse; + rtc_1 = &matcher->match_ste.rtc_1_reparse; ste_pool = matcher->match_ste.pool; ste = &matcher->match_ste.ste; ste->order = attr->table.sz_col_log + attr->table.sz_row_log; @@ -537,8 +538,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, break; case DR_MATCHER_RTC_TYPE_STE_ARRAY: - rtc_0 = &matcher->action_ste.rtc_0; - rtc_1 = &matcher->action_ste.rtc_1; + rtc_0 = &matcher->action_ste.rtc_0_reparse; + rtc_1 = &matcher->action_ste.rtc_1_reparse; ste_pool = 
matcher->action_ste.pool; ste = &matcher->action_ste.ste; ste->order = rte_log2_u32(matcher->action_ste.max_stes) + @@ -558,6 +559,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, return rte_errno; } +rertc: devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); rtc_attr.pd = ctx->pd_num; @@ -574,8 +576,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); if (!*rtc_0) { - DR_LOG(ERR, "Failed to create matcher RTC of type %s", - mlx5dr_matcher_rtc_type_to_str(rtc_type)); + DR_LOG(ERR, "Failed to create matcher RTC of type %s, reparse %u", + mlx5dr_matcher_rtc_type_to_str(rtc_type), rtc_attr.is_reparse); goto free_ste; } @@ -590,12 +592,25 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); if (!*rtc_1) { - DR_LOG(ERR, "Failed to create peer matcher RTC of type %s", - mlx5dr_matcher_rtc_type_to_str(rtc_type)); + DR_LOG(ERR, "Failed to create peer matcher RTC of type %s, reparse %u", + mlx5dr_matcher_rtc_type_to_str(rtc_type), rtc_attr.is_reparse); goto destroy_rtc_0; } } + /* RTC is created in reparse then no_reparse order and fw wqe. */ + if (rtc_attr.is_reparse && !mlx5dr_matcher_req_fw_wqe(matcher)) { + rtc_attr.is_reparse = false; + if (rtc_type == DR_MATCHER_RTC_TYPE_MATCH) { + rtc_0 = &matcher->match_ste.rtc_0_no_reparse; + rtc_1 = &matcher->match_ste.rtc_1_no_reparse; + } else { + rtc_0 = &matcher->action_ste.rtc_0_no_reparse; + rtc_1 = &matcher->action_ste.rtc_1_no_reparse; + } + goto rertc; + } + return 0; destroy_rtc_0: @@ -609,21 +624,25 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, enum mlx5dr_matcher_rtc_type rtc_type) { + struct mlx5dr_devx_obj *rtc_0, *rtc_1, *rtc_2, *rtc_3; struct mlx5dr_table *tbl = matcher->tbl; - struct mlx5dr_devx_obj *rtc_0, *rtc_1; struct mlx5dr_pool_chunk *ste; struct mlx5dr_pool *ste_pool; switch (rtc_type) { case DR_MATCHER_RTC_TYPE_MATCH: - rtc_0 = matcher->match_ste.rtc_0; - rtc_1 = matcher->match_ste.rtc_1; + rtc_0 = matcher->match_ste.rtc_0_reparse; + rtc_1 = matcher->match_ste.rtc_1_reparse; + rtc_2 = matcher->match_ste.rtc_0_no_reparse; + rtc_3 = matcher->match_ste.rtc_1_no_reparse; ste_pool = matcher->match_ste.pool; ste = &matcher->match_ste.ste; break; case DR_MATCHER_RTC_TYPE_STE_ARRAY: - rtc_0 = matcher->action_ste.rtc_0; - rtc_1 = matcher->action_ste.rtc_1; + rtc_0 = matcher->action_ste.rtc_0_reparse; + rtc_1 = matcher->action_ste.rtc_1_reparse; + rtc_2 = matcher->action_ste.rtc_0_no_reparse; + rtc_3 = matcher->action_ste.rtc_1_no_reparse; ste_pool = matcher->action_ste.pool; ste = &matcher->action_ste.ste; break; @@ -631,10 +650,15 @@ static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, return; } - if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { mlx5dr_cmd_destroy_obj(rtc_1); + if (rtc_3) + mlx5dr_cmd_destroy_obj(rtc_3); + } mlx5dr_cmd_destroy_obj(rtc_0); + if (rtc_2) + mlx5dr_cmd_destroy_obj(rtc_2); if (rtc_type == DR_MATCHER_RTC_TYPE_MATCH) mlx5dr_pool_chunk_free(ste_pool, ste); } diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 4759068ab4..02fd283cd1 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -43,8 +43,10 @@ struct mlx5dr_match_template { struct mlx5dr_matcher_match_ste { struct mlx5dr_pool_chunk ste; 
- struct mlx5dr_devx_obj *rtc_0; - struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_devx_obj *rtc_0_reparse; + struct mlx5dr_devx_obj *rtc_1_reparse; + struct mlx5dr_devx_obj *rtc_0_no_reparse; + struct mlx5dr_devx_obj *rtc_1_no_reparse; struct mlx5dr_pool *pool; /* Currently not support FDB aliased */ struct mlx5dr_devx_obj *aliased_rtc_0; @@ -53,8 +55,10 @@ struct mlx5dr_matcher_match_ste { struct mlx5dr_matcher_action_ste { struct mlx5dr_pool_chunk ste; struct mlx5dr_pool_chunk stc; - struct mlx5dr_devx_obj *rtc_0; - struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_devx_obj *rtc_0_reparse; + struct mlx5dr_devx_obj *rtc_1_reparse; + struct mlx5dr_devx_obj *rtc_0_no_reparse; + struct mlx5dr_devx_obj *rtc_1_no_reparse; struct mlx5dr_pool *pool; uint8_t max_stes; }; diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index 2418ca0b26..70c6c08741 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -40,11 +40,35 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, } } +static void mlxdr_rule_set_wqe_rtc_id(struct mlx5dr_send_ring_dep_wqe *wqe, + struct mlx5dr_matcher *matcher, + bool reparse, bool mirror) +{ + if (!mirror && !reparse) { + wqe->rtc_0 = matcher->match_ste.rtc_0_no_reparse->id; + wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0_no_reparse->id : 0; + } else if (!mirror && reparse) { + wqe->rtc_0 = matcher->match_ste.rtc_0_reparse->id; + wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0_reparse->id : 0; + } else if (mirror && reparse) { + wqe->rtc_1 = matcher->match_ste.rtc_1_reparse->id; + wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1_reparse->id : 0; + } else if (mirror && !reparse) { + wqe->rtc_1 = matcher->match_ste.rtc_1_no_reparse->id; + wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1_no_reparse->id : 0; + } +} + static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, struct mlx5dr_rule *rule, const struct rte_flow_item *items, struct mlx5dr_match_template *mt, - void *user_data) + void *user_data, + bool reparse) { struct mlx5dr_matcher *matcher = rule->matcher; struct mlx5dr_table *tbl = matcher->tbl; @@ -56,9 +80,7 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, switch (tbl->type) { case MLX5DR_TABLE_TYPE_NIC_RX: case MLX5DR_TABLE_TYPE_NIC_TX: - dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; - dep_wqe->retry_rtc_0 = matcher->col_matcher ? - matcher->col_matcher->match_ste.rtc_0->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, false); dep_wqe->rtc_1 = 0; dep_wqe->retry_rtc_1 = 0; break; @@ -67,18 +89,14 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, mlx5dr_rule_skip(matcher, mt, items, &skip_rx, &skip_tx); if (!skip_rx) { - dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; - dep_wqe->retry_rtc_0 = matcher->col_matcher ? - matcher->col_matcher->match_ste.rtc_0->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, false); } else { dep_wqe->rtc_0 = 0; dep_wqe->retry_rtc_0 = 0; } if (!skip_tx) { - dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; - dep_wqe->retry_rtc_1 = matcher->col_matcher ? 
- matcher->col_matcher->match_ste.rtc_1->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, true); } else { dep_wqe->rtc_1 = 0; dep_wqe->retry_rtc_1 = 0; @@ -265,8 +283,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, } mlx5dr_rule_create_init(rule, &ste_attr, &apply); - mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data); - mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data); + /* FW WQE doesn't look on rtc reparse, use default REPARSE_ALWAYS. */ + mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data, true); + mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data, true); ste_attr.direct_index = 0; ste_attr.rtc_0 = match_wqe.rtc_0; @@ -348,6 +367,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, struct mlx5dr_actions_apply_data apply; struct mlx5dr_send_engine *queue; uint8_t total_stes, action_stes; + bool matcher_reparse; int i, ret; /* Insert rule using FW WQE if cannot use GTA WQE */ @@ -368,7 +388,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, * dep_wqe buffers (ctrl, data) are also reused for all STE writes. */ dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); - mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data); + /* Jumbo matcher reparse is off. */ + matcher_reparse = !is_jumbo && (at->setters[1].flags & ASF_REPARSE); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data, matcher_reparse); ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; ste_attr.wqe_data = &dep_wqe->wqe_data; @@ -389,9 +411,6 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, mlx5dr_send_abort_new_dep_wqe(queue); return ret; } - /* Skip RX/TX based on the dep_wqe init */ - ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; - ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; /* Action STEs are written to a specific index last to first */ ste_attr.direct_index = rule->action_ste_idx + action_stes; apply.next_direct_idx = ste_attr.direct_index; @@ -400,7 +419,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, } for (i = total_stes; i-- > 0;) { - mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + mlx5dr_action_apply_setter(&apply, setter, !i && is_jumbo); if (i == 0) { /* Handle last match STE. @@ -431,9 +450,21 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ? attr->rule_idx : 0; } else { + if (setter->flags & ASF_REPARSE) { + ste_attr.rtc_0 = dep_wqe->rtc_0 ? + matcher->action_ste.rtc_0_reparse->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? + matcher->action_ste.rtc_1_reparse->id : 0; + } else { + ste_attr.rtc_0 = dep_wqe->rtc_0 ? + matcher->action_ste.rtc_0_no_reparse->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? + matcher->action_ste.rtc_1_no_reparse->id : 0; + } apply.next_direct_idx = --ste_attr.direct_index; } + setter--; mlx5dr_send_ste(queue, &ste_attr); } -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
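For orientation, the RTC selection that the two groups enable can be sketched as follows. This is a minimal illustration under stated assumptions, not part of the patch: it reuses the rtc_*_reparse/rtc_*_no_reparse fields added to struct mlx5dr_matcher_match_ste above, and the helper name pick_match_rtc() is hypothetical.

static struct mlx5dr_devx_obj *
pick_match_rtc(struct mlx5dr_matcher *matcher, bool reparse, bool mirror)
{
        /* Group 1 (reparse) vs. group 2 (no reparse), per direction,
         * mirroring the checks done in mlxdr_rule_set_wqe_rtc_id(). */
        struct mlx5dr_matcher_match_ste *ms = &matcher->match_ste;

        if (!mirror)
                return reparse ? ms->rtc_0_reparse : ms->rtc_0_no_reparse;
        return reparse ? ms->rtc_1_reparse : ms->rtc_1_no_reparse;
}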
* [PATCH v1 4/8] net/mlx5: sample the srv6 last segment 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu ` (2 preceding siblings ...) 2023-04-17 9:25 ` [PATCH v1 3/8] net/mlx5/hws: add no reparse support Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource Rongwei Liu ` (3 subsequent siblings) 7 siblings, 1 reply; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas When removing the IPv6 routing extension header from the packets, the destination address should be updated to the last one in the segment list. Enlarge the hardware sample scope to cover the last segment. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- drivers/net/mlx5/mlx5.c | 42 +++++++++++++++++++++++++----------- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 15 +++++++++++++ 3 files changed, 46 insertions(+), 12 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index f24e20a2ef..1418ffdea7 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1054,7 +1054,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0, }; - uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; + uint32_t i, ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_dev_config *config = &priv->sh->cdev->config; void *fp = NULL, *ibv_ctx = priv->sh->cdev->ctx; @@ -1084,10 +1084,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) node.next_header_field_size = 0x8; node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP; node.in[0].compare_condition_value = IPPROTO_ROUTING; - node.sample[0].flow_match_sample_en = 1; - /* First come first serve no matter inner or outer. */ - node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; - node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* Final IPv6 address. */ + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + node.sample[i].flow_match_sample_en = 1; + node.sample[i].flow_match_sample_offset_mode = + MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* First come first serve no matter inner or outer. 
*/ + node.sample[i].flow_match_sample_tunnel_mode = + MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; + node.sample[i].flow_match_sample_field_base_offset = + (i + 1) * sizeof(uint32_t); /* in bytes */ + } + node.sample[0].flow_match_sample_field_base_offset = 0; node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP; node.out[0].compare_condition_value = IPPROTO_TCP; node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP; @@ -1100,8 +1108,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) goto error; } priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp; - priv->sh->srh_flex_parser.flex.mapnum = 1; - priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1; + priv->sh->srh_flex_parser.flex.mapnum = 5; + priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 5; ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum, &priv->sh->srh_flex_parser.flex.devx_fp->anchor_id); @@ -1109,12 +1117,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) DRV_LOG(ERR, "Failed to query sample IDs."); goto error; } - ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0], - &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]); - if (ret) { - DRV_LOG(ERR, "Failed to query sample id information."); - goto error; + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i], + &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]); + if (ret) { + DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]); + goto error; + } + } + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i]; + priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT; + priv->sh->srh_flex_parser.flex.map[i].reg_id = i; + priv->sh->srh_flex_parser.flex.map[i].shift = + (i + 1) * sizeof(uint32_t) * CHAR_BIT; } + priv->sh->srh_flex_parser.flex.map[0].shift = 0; return 0; error: if (fp) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9eae692037..3fbec4db9e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1323,6 +1323,7 @@ struct mlx5_flex_pattern_field { uint16_t shift:5; uint16_t reg_id:5; }; + #define MLX5_INVALID_SAMPLE_REG_ID 0x1F /* Port flex item context. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1d116ea0f6..821c6ca281 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2666,4 +2666,19 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline void * +flow_hw_get_dev_from_ctx(void *dr_ctx) +{ + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return &rte_eth_devices[port]; + } + return NULL; +} + #endif /* RTE_PMD_MLX5_FLOW_H_ */ -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
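The sampling layout programmed above is worth spelling out: sample 0 covers the first dword of the routing header (next_hdr, hdr_ext_len, type, segments_left), while samples 1-4 cover bytes 8..23, i.e. Segment List[0], which SRv6 stores in reverse order and therefore holds the final destination address. A tiny sketch of the offset arithmetic (illustrative only, not part of the patch):

#include <stdint.h>

/* Byte offset of flex parser sample i from the start of the IPv6
 * routing extension header, matching the loop above after sample 0
 * has been reset to offset 0. */
static inline uint32_t
srv6_sample_offset(uint32_t i)
{
        return i == 0 ? 0 : (i + 1) * (uint32_t)sizeof(uint32_t);
}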
* [PATCH v2 0/6] support IPv6 extension push remove 2023-04-17 9:25 ` [PATCH v1 4/8] net/mlx5: sample the srv6 last segment Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu ` (6 more replies) 0 siblings, 7 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Support IPv6 extension push/remove in MLX5 PMD. Routing extension is the only supported type. v2: add reparse control and rebase. Rongwei Liu (6): net/mlx5: sample the srv6 last segment net/mlx5/hws: fix potential wrong errno value net/mlx5/hws: add IPv6 routing extension push remove actions net/mlx5/hws: add setter for IPv6 routing push remove net/mlx5: implement IPv6 routing push remove net/mlx5/hws: add stc reparse support for srv6 push pop doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 +- doc/guides/rel_notes/release_23_11.rst | 2 + drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 29 ++ drivers/net/mlx5/hws/mlx5dr_action.c | 621 ++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 17 +- drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + drivers/net/mlx5/mlx5.c | 41 +- drivers/net/mlx5/mlx5.h | 7 + drivers/net/mlx5/mlx5_flow.h | 65 ++- drivers/net/mlx5/mlx5_flow_hw.c | 282 ++++++++++- 12 files changed, 1033 insertions(+), 47 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
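Before diving into the v2 patches, a usage-level illustration may help; it is not part of the series. Assuming the conf struct from the ethdev patch carries a single 'type' field, asking the PMD to strip the routing extension looks roughly like this:

#include <netinet/in.h>
#include <rte_flow.h>

/* Sketch: IPPROTO_ROUTING is the only extension type this series
 * supports; pattern, attributes and template setup are omitted. */
static const struct rte_flow_action_ipv6_ext_remove remove_conf = {
        .type = IPPROTO_ROUTING,
};

static const struct rte_flow_action remove_actions[] = {
        {
                .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE,
                .conf = &remove_conf,
        },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};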
* [PATCH v2 1/6] net/mlx5: sample the srv6 last segment 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu ` (5 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas When removing the IPv6 routing extension header from the packets, the destination address should be updated to the last one in the segment list. Enlarge the hardware sample scope to cover the last segment. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++----------- drivers/net/mlx5/mlx5.h | 6 ++++++ 2 files changed, 36 insertions(+), 11 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index f929d6547c..92d66e8f23 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1067,6 +1067,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0, }; + uint32_t i; uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_dev_config *config = &priv->sh->cdev->config; @@ -1100,10 +1101,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) node.next_header_field_size = 0x8; node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP; node.in[0].compare_condition_value = IPPROTO_ROUTING; - node.sample[0].flow_match_sample_en = 1; - /* First come first serve no matter inner or outer. */ - node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; - node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* Final IPv6 address. */ + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + node.sample[i].flow_match_sample_en = 1; + node.sample[i].flow_match_sample_offset_mode = + MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* First come first serve no matter inner or outer. 
*/ + node.sample[i].flow_match_sample_tunnel_mode = + MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; + node.sample[i].flow_match_sample_field_base_offset = + (i + 1) * sizeof(uint32_t); /* in bytes */ + } + node.sample[0].flow_match_sample_field_base_offset = 0; node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP; node.out[0].compare_condition_value = IPPROTO_TCP; node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP; @@ -1116,8 +1125,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) goto error; } priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp; - priv->sh->srh_flex_parser.flex.mapnum = 1; - priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1; + priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM; + priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM; ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum, &priv->sh->srh_flex_parser.flex.devx_fp->anchor_id); @@ -1125,12 +1134,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) DRV_LOG(ERR, "Failed to query sample IDs."); goto error; } - ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0], - &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]); - if (ret) { - DRV_LOG(ERR, "Failed to query sample id information."); - goto error; + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i], + &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]); + if (ret) { + DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]); + goto error; + } + } + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i]; + priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT; + priv->sh->srh_flex_parser.flex.map[i].reg_id = i; + priv->sh->srh_flex_parser.flex.map[i].shift = + (i + 1) * sizeof(uint32_t) * CHAR_BIT; } + priv->sh->srh_flex_parser.flex.map[0].shift = 0; return 0; error: if (fp) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index a20acb6ca8..f13a56ee9e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1335,6 +1335,7 @@ struct mlx5_flex_pattern_field { uint16_t shift:5; uint16_t reg_id:5; }; + #define MLX5_INVALID_SAMPLE_REG_ID 0x1F /* Port flex item context. */ @@ -1346,6 +1347,11 @@ struct mlx5_flex_item { struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM]; }; +/* + * Sample an IPv6 address and the first dword of SRv6 header. + * Then it is 16 + 4 = 20 bytes which is 5 dwords. + */ +#define MLX5_SRV6_SAMPLE_NUM 5 /* Mlx5 internal flex parser profile structure. */ struct mlx5_internal_flex_parser_profile { uint32_t refcnt; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
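The "16 + 4 = 20 bytes" sizing comment added to mlx5.h can be made concrete with a compile-time check; this snippet is purely illustrative:

#include <assert.h>
#include <stdint.h>

#define MLX5_SRV6_SAMPLE_NUM 5 /* as defined in mlx5.h above */

/* 4 bytes (first SRH dword) + 16 bytes (final IPv6 address), sampled
 * through 32-bit registers, need exactly 5 samples. */
static_assert((sizeof(uint32_t) + 16) / sizeof(uint32_t) == MLX5_SRV6_SAMPLE_NUM,
              "srv6 sampling covers 20 bytes in dword registers");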
* [PATCH v2 2/6] net/mlx5/hws: fix potential wrong errno value 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu ` (4 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: hamdani, Alex Vesker A valid rte_errno is desired when the DR layer API returns an error, and it must not overwrite the value set by the underlying layer. Fixes: 890db3e2b90 ("net/mlx5/hws: support insert header action") Cc: hamdani@nvidia.com Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 59be8ae2c5..76ca57d302 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2262,6 +2262,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, if (!num_of_hdrs) { DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero"); + rte_errno = EINVAL; return NULL; } @@ -2309,7 +2310,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, reformat_hdrs, log_bulk_size); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); - rte_errno = EINVAL; goto free_reformat_hdrs; } -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
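The convention this fix restores can be sketched in isolation; inner_create() below is a hypothetical stand-in for the failing sub-call, not a real mlx5dr function:

#include <errno.h>
#include <rte_errno.h>

/* Inner layer reports the root cause itself. */
static void *
inner_create(void)
{
        rte_errno = ENOMEM; /* e.g. allocation failure */
        return NULL;
}

/* Outer layer propagates the failure without clobbering rte_errno,
 * the pattern restored in mlx5dr_action_create_insert_header(). */
static void *
outer_create(void)
{
        void *obj = inner_create();

        if (obj == NULL)
                return NULL; /* keep the inner rte_errno value */
        return obj;
}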
* [PATCH v2 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu ` (3 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker Add two dr_actions to implement IPv6 routing extension push and remove. The new actions are combinations of multiple existing actions rather than new action types: basically, two modify-header actions plus one reformat action. The action order is the same as for the encap and decap actions. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 29 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 7 + drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + drivers/net/mlx5/mlx5_flow.h | 44 ++++ 6 files changed, 438 insertions(+), 3 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index a5ecce98e9..32ec3df7ef 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3586,6 +3586,7 @@ enum mlx5_ifc_header_anchors { MLX5_HEADER_ANCHOR_PACKET_START = 0x0, MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_TCP_UDP = 0x09, MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, }; diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index 2e692f76c3..9e7dd9c429 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -54,6 +54,8 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_REMOVE_HEADER, MLX5DR_ACTION_TYP_DEST_ROOT, MLX5DR_ACTION_TYP_DEST_ARRAY, + MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, + MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, MLX5DR_ACTION_TYP_MAX, }; @@ -278,6 +280,11 @@ struct mlx5dr_rule_action { uint8_t *data; } reformat; + struct { + uint32_t offset; + uint8_t *header; + } ipv6_ext; + struct { rte_be32_t vlan_hdr; } push_vlan; @@ -889,6 +896,28 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, uint32_t flags); +/* Create action to push or remove IPv6 extension header. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] type + * Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or + * MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT. + * @param[in] hdr + * Header for packet reformat. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags); + /* Destroy direct rule action.
* * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 76ca57d302..6ac3c2f782 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -26,7 +26,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -39,6 +40,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -61,6 +63,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), @@ -75,7 +78,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -88,6 +92,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), @@ -1710,7 +1715,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) { - DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags); + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags); rte_errno = EINVAL; goto free_action; } @@ -2382,6 +2387,347 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, return NULL; } +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[3] = {0}; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Next_hdr will be copied to ipv6.protocol after pop done. 
+ */ + MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[0], length, 8); + MLX5_SET(copy_action_in, &cmd[0], src_offset, 24); + MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id); + + /* Add nop between the continuous same modify field id */ + MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP); + + /* Clear next_hdr for right checksum */ + MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[2], length, 8); + MLX5_SET(set_action_in, &cmd[2], offset, 24); + MLX5_SET(set_action_in, &cmd[2], field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]); + MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Restore next_hdr from seg_left for flex parser identifying */ + MLX5_SET(copy_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[4], length, 8); + MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24); + MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */ + MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, cmd, length, 8); + MLX5_SET(copy_action_in, cmd, src_offset, 24); + MLX5_SET(copy_action_in, cmd, src_field, mod_id); + MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static int +mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action) +{ + uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx); + struct mlx5dr_action_remove_header_attr hdr_attr; + uint32_t i; + + if (!anchor_id) { + rte_errno = EINVAL; + return rte_errno; + } + + action->ipv6_route_ext.action[0] = + 
mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action); + + hdr_attr.by_anchor.decap = 1; + hdr_attr.by_anchor.start_anchor = anchor_id; + hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + hdr_attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->ipv6_route_ext.action[3] = + mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags); + + if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + + /* Set ipv6.protocol to IPPROTO_ROUTING */ + MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, cmd, length, 8); + MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0, + action->flags | MLX5DR_ACTION_FLAG_SHARED); +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, + uint32_t bulk_size, + uint8_t *data) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + uint32_t *ipv6_dst_addr = NULL; + uint8_t seg_left, next_hdr; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + } + + /* Copy IPv6 destination address from ipv6_route_ext.last_segment */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[i], field, field[i]); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++)); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */ + MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[4], length, 8); + MLX5_SET(set_action_in, &cmd[4], offset, 24); + MLX5_SET(set_action_in, &cmd[4], field, mod_id); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr); + MLX5_SET(set_action_in, &cmd[4], data, next_hdr); + } + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return 
mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + bulk_size, action->flags); +} + +static int +mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, + struct mlx5dr_action_reformat_header *hdr, + uint32_t bulk_size) +{ + struct mlx5dr_action_insert_header insert_hdr = { {0} }; + uint8_t header[MLX5_PUSH_MAX_LEN]; + uint32_t i; + + if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN || + ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) { + DR_LOG(ERR, "Invalid ipv6_route_ext header"); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + memcpy(header, hdr->data, hdr->sz); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + } + + insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + insert_hdr.encap = 1; + insert_hdr.hdr.sz = hdr->sz; + insert_hdr.hdr.data = header; + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, + bulk_size, action->flags); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data); + + if (!action->ipv6_route_ext.action[0] || + !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 extension actions is not supported"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) { + rte_errno = ENOMEM; + return NULL; + } + + switch (action_type) { + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) { + DR_LOG(ERR, "Pop ipv6_route_ext must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_pop_ipv6_route_ext(action); + break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); + break; + default: + DR_LOG(ERR, "Unsupported action type %d\n", action_type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (ret) { + DR_LOG(ERR, "Failed to create IPv6 extension reformat action"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2455,6 +2801,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + for (i = 
0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index e56f5b59c7..d0152dde3b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -8,6 +8,9 @@ /* Max number of STEs needed for a rule (including match) */ #define MLX5DR_ACTION_MAX_STE 10 +/* Max number of internal subactions of ipv6_ext */ +#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 + enum mlx5dr_action_stc_idx { MLX5DR_ACTION_STC_IDX_CTRL = 0, MLX5DR_ACTION_STC_IDX_HIT = 1, @@ -143,6 +146,10 @@ struct mlx5dr_action { uint8_t offset; bool encap; } reformat; + struct { + struct mlx5dr_action + *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA]; + } ipv6_route_ext; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 5111f41648..1e5ef9cf67 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -31,6 +31,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT", [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", + [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT", + [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index e637c98b95..43608e15d2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -595,6 +595,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -2898,6 +2899,49 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline uint8_t +flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + } +#else + RTE_SET_USED(dr_ctx); +#endif + return 0; +} + +static __rte_always_inline uint16_t +flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + struct mlx5_flex_parser_devx *fp; + + if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM) + return 0; + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) { + fp = priv->sh->srh_flex_parser.flex.devx_fp; + return fp->sample_info[idx].modify_field_id; + } + } +#else + RTE_SET_USED(dr_ctx); + RTE_SET_USED(idx); +#endif + return 0; +} + void mlx5_indirect_list_handles_release(struct rte_eth_dev *dev); void -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
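A caller-side sketch of the new API may be useful; it is illustrative only, with 'ctx' assumed to be a valid HWS context and the FDB flag chosen merely as an example:

#include "mlx5dr.h"

/* Sketch: create the shared pop action for the IPv6 routing
 * extension. Per this patch the pop variant must carry
 * MLX5DR_ACTION_FLAG_SHARED, and the header argument is unused. */
static struct mlx5dr_action *
create_srv6_pop(struct mlx5dr_context *ctx)
{
        return mlx5dr_action_create_reformat_ipv6_ext(ctx,
                        MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
                        NULL, 0,
                        MLX5DR_ACTION_FLAG_HWS_FDB |
                        MLX5DR_ACTION_FLAG_SHARED);
}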
* [PATCH v2 4/6] net/mlx5/hws: add setter for IPv6 routing push remove 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu ` (2 preceding siblings ...) 2023-10-31 9:42 ` [PATCH v2 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 5/6] net/mlx5: implement " Rongwei Liu ` (2 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker Each rte action will be translated into multiple dr_actions, which need different setters to program them. In order to leverage the existing setter logic, a new callback named fetch_opt is introduced, taking a unique parameter. Each setter may have different reparsing properties: a setter which requires no reparse can't share a slot with one that has reparse enabled, even if there is spare space. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 3 +- 2 files changed, 176 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 6ac3c2f782..281b09a582 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -3311,6 +3311,121 @@ mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; } +static void +mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data) +{ + uint8_t *action_ptr = mh_data; + uint32_t *ipv6_dst_addr; + uint8_t seg_left; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list which is the next hop */ + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + + /* Load next hop IPv6 address in reverse order to ipv6.dst_address */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, action_ptr, data, be32toh(*ipv6_dst_addr++)); + action_ptr += MLX5DR_MODIFY_ACTION_SIZE; + } + + /* Set ipv6_route_ext.next_hdr per user input */ + MLX5_SET(set_action_in, action_ptr, data, *data); +} + +static void +mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0}; + struct mlx5dr_action *ipv6_ext_action; + uint8_t *header; + + header = rule_action[setter->idx_double].ipv6_ext.header; + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.modify_header.offset = 0; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.data = NULL; + } else { + /* + * Copy ipv6_dst from ipv6_route_ext.last_seg. + * Set ipv6_route_ext.next_hdr.
+ */ + mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd); + tmp_rule_action.modify_header.data = (uint8_t *)cmd; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_modify_header(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + struct mlx5dr_action *ipv6_ext_action; + uint8_t header[MLX5_PUSH_MAX_LEN]; + + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.reformat.offset = 0; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.data = NULL; + } else { + memcpy(header, rule_action[setter->idx_double].ipv6_ext.header, + tmp_rule_action.action->reformat.header_size); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + tmp_rule_action.reformat.data = header; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_insert_ptr(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = &apply->rule_action[setter->idx_single]; + uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1; + struct mlx5dr_action *action; + + /* Pop the ipv6_route_ext as set_single logic */ + action = rule_action->action->ipv6_route_ext.action[idx]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(action->stc[apply->tbl_type].offset); +} + int mlx5dr_action_template_process(struct mlx5dr_action_template *at) { struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; @@ -3374,6 +3489,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Set ipv6_route_ext.next_hdr to 0 for checksum bug. + */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* + * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left. + * Load the final destination address from flex parser sample 1->4. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + /* Pop ipv6_route_ext */ + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* + * Load the right ipv6_route_ext.next_hdr per user input buffer. + * Load the next dest_addr from the ipv6_route_ext.seg_list[last]. + */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index d0152dde3b..ce9091a336 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -6,7 +6,7 @@ #define MLX5DR_ACTION_H_ /* Max number of STEs needed for a rule (including match) */ -#define MLX5DR_ACTION_MAX_STE 10 +#define MLX5DR_ACTION_MAX_STE 20 /* Max number of internal subactions of ipv6_ext */ #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 @@ -109,6 +109,7 @@ struct mlx5dr_actions_wqe_setter { uint8_t idx_ctr; uint8_t idx_hit; uint8_t flags; + uint8_t extra_data; }; struct mlx5dr_action_template { -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
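At rule-insertion time these setters consume the per-rule data through the ipv6_ext member of struct mlx5dr_rule_action added in patch 3/6. A minimal sketch of how a caller might fill it for a non-shared push (function and variable names are placeholders):

#include <stdint.h>
#include "mlx5dr.h"

/* Sketch: hand a per-rule routing-header template to the push setters;
 * 'offset' selects this rule's slot in the bulk insert/modify-header
 * arguments. */
static void
fill_srv6_push_rule_action(struct mlx5dr_rule_action *ra,
                           struct mlx5dr_action *push_action,
                           uint8_t *srh_template, uint32_t offset)
{
        ra->action = push_action;
        ra->ipv6_ext.header = srh_template;
        ra->ipv6_ext.offset = offset;
}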
* [PATCH v2 5/6] net/mlx5: implement IPv6 routing push remove 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu ` (3 preceding siblings ...) 2023-10-31 9:42 ` [PATCH v2 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 9:42 ` [PATCH v2 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Reserve a push data buffer for each job; the maximum length is set to 128 bytes for now. Only the IPPROTO_ROUTING type is supported when translating the rte flow action. Remove actions must be shared globally and support only TCP or UDP as the next layer. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 +- doc/guides/rel_notes/release_23_11.rst | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 21 +- drivers/net/mlx5/mlx5_flow_hw.c | 282 ++++++++++++++++++++++++- 6 files changed, 309 insertions(+), 10 deletions(-) diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 0ed9a6aefc..0739fe9d63 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -108,6 +108,8 @@ flag = Y inc_tcp_ack = Y inc_tcp_seq = Y indirect_list = Y +ipv6_ext_push = Y +ipv6_ext_remove = Y jump = Y mark = Y meter = Y diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index be5054e68a..955dedf3db 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -148,7 +148,9 @@ Features - Matching on GTP extension header with raw encap/decap action. - Matching on Geneve TLV option header with raw encap/decap action. - Matching on ESP header SPI field. +- Matching on flex item with specific pattern. - Matching on InfiniBand BTH. +- Modify flex item field. - Modify IPv4/IPv6 ECN field. - RSS support in sample action. - E-Switch mirroring and jump. @@ -166,7 +168,7 @@ Features - Sub-Function. - Matching on represented port. - Matching on aggregated affinity. - +- Push or remove IPv6 routing extension. Limitations ----------- @@ -759,6 +761,13 @@ Limitations to the representor of the source virtual port (SF/VF), while if it is disabled, the traffic will be routed based on the steering rules in the ingress domain. +- IPv6 routing extension push or remove: + + - Supported only with HW Steering enabled (``dv_flow_en`` = 2). + - Supported in non-zero group (No limits on transfer domain if `fdb_def_rule_en` = 1 which is default). + - Only supports TCP or UDP as next layer. + - IPv6 routing header must be the only present extension. + - Not supported on guest port. Statistics ---------- diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index 322d8b1e0e..78e774cf02 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -150,6 +150,8 @@ New Features * Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action. * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item. * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror. + * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+ * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action. * **Updated Solarflare net driver.** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f13a56ee9e..277bbbf407 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -373,6 +373,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 43608e15d2..c7be1f3553 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -363,6 +363,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44) #define MLX5_FLOW_ACTION_QUOTA (1ull << 46) #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -1269,6 +1271,8 @@ typedef int const struct rte_flow_action *, struct mlx5dr_rule_action *); +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 extension push data len. */ + uint16_t len; + } ipv6_ext; struct { uint32_t id; uint32_t conf_masked:1; @@ -1359,6 +1367,7 @@ struct rte_flow_actions_template { uint16_t *src_off; /* RTE action displacement from app. template */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */ uint32_t refcnt; /* Reference counter. */ uint8_t flex_item; /* flex item index. */ }; @@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push remove action struct. */ +struct mlx5_hw_push_remove_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_remove action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1415,6 +1431,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/remove action. */ + struct mlx5_hw_push_remove_action *push_remove; + uint16_t push_remove_pos; /* Push/remove action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 977751394e..592d436099 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_remove) { + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } if (acts->mhdr) { flow_hw_template_destroy_mhdr_action(acts->mhdr); mlx5_free(acts->mhdr); @@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. + */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len) +{ + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->ipv6_ext.len = len; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, return 0; } + +static int +mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct rte_flow_actions_template *at, + uint8_t *push_data, uint8_t *push_data_m, + size_t push_size, uint16_t recom_src, + enum mlx5dr_action_type recom_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + const struct rte_flow_attr *attr = &table_attr->flow_attr; + enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); + struct mlx5_action_construct_data *act_data; + struct mlx5dr_action_reformat_header hdr = {0}; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_remove) + push_size, + 0, SOCKET_ID_ANY); + if (!acts->push_remove) + return -ENOMEM; + + switch (recom_type) { + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!push_data || !push_size) + goto err1; + if (!push_data_m) { + bulk = rte_log2_u32(table_attr->nb_flows); + } else { + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + } + acts->push_remove->data_size = push_size; + memcpy(acts->push_remove->data, push_data, push_size); + hdr.data = push_data; + hdr.sz = push_size; + break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + break; + default: + break; + } + + acts->push_remove->action = + mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx, + recom_type, &hdr, bulk, flag); + if (!acts->push_remove->action) + 
goto err1; + acts->rule_acts[at->recom_off].action = acts->push_remove->action; + acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data; + acts->rule_acts[at->recom_off].ipv6_ext.offset = 0; + acts->push_remove_pos = at->recom_off; + if (!acts->push_remove->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size); + if (!act_data) + goto err; + } + return 0; +err: + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); +err1: + if (acts->push_remove) { + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } + return -EINVAL; +} + /** * Translate rte_flow actions to DR action. * @@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *masks = at->masks; enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t jump_pos; uint32_t ct_idx; @@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + if (ipv6_ext_data) { + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + } + recom_src = src_pos; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: flow_hw_translate_group(dev, cfg, attr->group, &target_grp, error); @@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, if (ret) goto err; } + if (recom_used) { + MLX5_ASSERT(at->recom_off != UINT16_MAX); + ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data, + push_data_m, push_size, recom_src, + recom_type); + if (ret) + goto err; + } return 0; err: err = rte_errno; @@ -2719,11 +2882,13 @@ 
flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, + act_data->ipv6_ext.len); + MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_remove && !hw_acts->push_remove->shared) { + rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = + job->flow->res_idx - 1; + rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, #endif uint16_t i; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. */ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. 
*/ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, [RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, }; static inline void @@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, * NULL otherwise. */ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for remove anchor. 
*/ + if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT || + recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, break; } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, struct rte_flow_actions_template *template, struct rte_flow_error *error __rte_unused) { + uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE | + MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) { DRV_LOG(WARNING, "Action template %p is still in use.", (void *)template); @@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->action_flags & flag) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; struct rte_flow_hw *upd_flow = NULL; @@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; upd_flow = (struct rte_flow_hw *) &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
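A minimal application-side sketch of driving this implementation through rte_flow (the SRv6 header bytes below are illustrative values, not taken from the patch, and error handling is omitted):

#include <netinet/in.h>
#include <rte_flow.h>

/* One-segment SRv6 header per RFC 8754: 8 fixed bytes + one 16-byte segment.
 * Byte order: next_hdr, hdr_ext_len, routing type, segments_left, last_entry,
 * flags, tag (2 bytes), then the segment list. */
static uint8_t srv6_hdr[24] = {
	IPPROTO_TCP, /* next layer must be TCP or UDP per the limits above */
	2,           /* hdr_ext_len = (24 - 8) / 8 */
	4,           /* routing type 4: segment routing */
	1,           /* segments_left */
	0,           /* last_entry */
	/* flags, tag and the 16-byte segment address stay zero here */
};

static const struct rte_flow_action_ipv6_ext_push push_conf = {
	.data = srv6_hdr,
	.size = sizeof(srv6_hdr),
	.type = IPPROTO_ROUTING, /* the only type this translation accepts */
};

static const struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, .conf = &push_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

The template carrying these actions must be created with HW Steering enabled (dv_flow_en = 2), matching the documentation added above.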
* [PATCH v2 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu ` (4 preceding siblings ...) 2023-10-31 9:42 ` [PATCH v2 5/6] net/mlx5: implement " Rongwei Liu @ 2023-10-31 9:42 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 9:42 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Erez Shitrit After pushing srv6 into or popping it from IPv6 packets, the checksum needs to stay correct. To achieve this, each STE's reparse behavior must be controllable (CX7 and above). Add two more enumeration values to allow external control of the reparse property in the STC. 1. Push a. 1st STE, insert header action, reparse ignored (default: reparse always) b. 2nd STE, modify IPv6 protocol, reparse always as default. c. 3rd STE, modify header list, reparse always (default: reparse ignored) 2. Pop a. 1st STE, modify header list, reparse always (default: reparse ignored) b. 2nd STE, modify header list, reparse always (default: reparse ignored) c. 3rd STE, modify IPv6 protocol, reparse ignored (default: reparse always); remove header action, reparse always as default. For CX6Lx and CX6Dx, the reparse behavior is controlled by the RTC as always, so only the pop action can work well. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 115 +++++++++++++++++++-------- drivers/net/mlx5/hws/mlx5dr_action.h | 7 ++ 2 files changed, 87 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 281b09a582..daeabead2a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -640,6 +640,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; if (action->modify_header.require_reparse) attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; @@ -678,9 +679,12 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: case MLX5DR_ACTION_TYP_INSERT_HEADER: + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; + if (!action->reformat.require_reparse) + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->insert_header.encap = action->reformat.encap; attr->insert_header.insert_anchor = action->reformat.anchor; attr->insert_header.arg_id = action->reformat.arg_obj->id; @@ -1441,7 +1445,7 @@ static int mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, uint8_t num_of_hdrs, struct mlx5dr_action_reformat_header *hdrs, - uint32_t log_bulk_sz) + uint32_t log_bulk_sz, uint32_t reparse) { struct mlx5dr_devx_obj *arg_obj; size_t max_sz = 0; @@ -1478,6 +1482,11 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, action[i].reformat.encap = 1; } + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].reformat.require_reparse = true; + else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON) +
action[i].reformat.require_reparse = true; + ret = mlx5dr_action_create_stcs(&action[i], NULL); if (ret) { DR_LOG(ERR, "Failed to create stc for reformat"); @@ -1514,7 +1523,8 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, - log_bulk_sz); + log_bulk_sz, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); if (ret) goto put_shared_stc; @@ -1657,7 +1667,8 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action, ret = mlx5dr_action_create_stcs(action, NULL); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: - ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size); + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size); @@ -1765,7 +1776,8 @@ static int mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, uint8_t num_of_patterns, struct mlx5dr_action_mh_pattern *pattern, - uint32_t log_bulk_size) + uint32_t log_bulk_size, + uint32_t reparse) { struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL; struct mlx5dr_context *ctx = action->ctx; @@ -1799,8 +1811,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, action[i].modify_header.num_of_patterns = num_of_patterns; action[i].modify_header.max_num_of_actions = max_mh_actions; action[i].modify_header.num_of_actions = num_actions; - action[i].modify_header.require_reparse = - mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].modify_header.require_reparse = + mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON) + action[i].modify_header.require_reparse = true; if (num_actions == 1) { pat_obj = NULL; @@ -1843,12 +1859,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, return rte_errno; } -struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - uint8_t num_of_patterns, - struct mlx5dr_action_mh_pattern *patterns, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_modify_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action *action; int ret; @@ -1896,7 +1912,8 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_modify_header_hws(action, num_of_patterns, patterns, - log_bulk_size); + log_bulk_size, + reparse); if (ret) goto free_action; @@ -1907,6 +1924,17 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_modify_header_reparse(ctx, num_of_patterns, patterns, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} static struct mlx5dr_devx_obj * mlx5dr_action_dest_array_process_reformat(struct mlx5dr_context *ctx, enum mlx5dr_action_type type, @@ -2254,12 +2282,12 @@ mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx, return action; } -struct mlx5dr_action * -mlx5dr_action_create_insert_header(struct 
mlx5dr_context *ctx, - uint8_t num_of_hdrs, - struct mlx5dr_action_insert_header *hdrs, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action_reformat_header *reformat_hdrs; struct mlx5dr_action *action; @@ -2312,7 +2340,8 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, } ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, - reformat_hdrs, log_bulk_size); + reformat_hdrs, log_bulk_size, + reparse); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); goto free_reformat_hdrs; @@ -2329,6 +2358,18 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_insert_header_reparse(ctx, num_of_hdrs, hdrs, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} + struct mlx5dr_action * mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, @@ -2422,8 +2463,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2469,8 +2511,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2496,8 +2539,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) pattern.data = (__be64 *)cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); } static int @@ -2644,8 +2688,9 @@ mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, insert_hdr.hdr.sz = hdr->sz; insert_hdr.hdr.data = header; action->ipv6_route_ext.action[0] = - mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, - bulk_size, action->flags); + mlx5dr_action_create_insert_header_reparse(action->ctx, 1, &insert_hdr, + bulk_size, action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); action->ipv6_route_ext.action[1] = mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); action->ipv6_route_ext.action[2] = @@ -2678,12 +2723,6 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, struct mlx5dr_action *action; int ret; - if (mlx5dr_context_cap_dynamic_reparse(ctx)) { - DR_LOG(ERR, "IPv6 extension actions is not supported"); - rte_errno = ENOTSUP; - return NULL; - } - if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); @@ -2708,6 +2747,12 @@ 
mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_pop_ipv6_route_ext(action); break; case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 routing extension push actions is not supported"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); break; default: diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index ce9091a336..ec6605bf7a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -65,6 +65,12 @@ enum mlx5dr_action_setter_flag { ASF_HIT = 1 << 7, }; +enum mlx5dr_action_stc_reparse { + MLX5DR_ACTION_STC_REPARSE_DEFAULT, + MLX5DR_ACTION_STC_REPARSE_ON, + MLX5DR_ACTION_STC_REPARSE_OFF, +}; + struct mlx5dr_action_default_stc { struct mlx5dr_pool_chunk nop_ctr; struct mlx5dr_pool_chunk nop_dw5; @@ -146,6 +152,7 @@ struct mlx5dr_action { uint8_t anchor; uint8_t offset; bool encap; + uint8_t require_reparse; } reformat; struct { struct mlx5dr_action -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
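As a quick reference for the pop path, the reparse overrides chosen in the hunks above can be summarized in one illustrative array (a summary for the reader, not code from the patch):

/* Pop sub-actions in order: mhdr1, mhdr2, mhdr3. The trailing remove-header
 * action keeps its default reparse-always behavior. */
static const enum mlx5dr_action_stc_reparse srv6_pop_mhdr_reparse[] = {
	MLX5DR_ACTION_STC_REPARSE_ON,  /* backup next_hdr, then clear it */
	MLX5DR_ACTION_STC_REPARSE_ON,  /* load final dst addr, restore next_hdr */
	MLX5DR_ACTION_STC_REPARSE_OFF, /* copy next_hdr into ipv6.protocol */
};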
* [PATCH v3 0/6] support IPv6 extension push remove 2023-10-31 9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu ` (5 preceding siblings ...) 2023-10-31 9:42 ` [PATCH v2 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu ` (6 more replies) 6 siblings, 7 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Support IPv6 extension push/remove actions in MLX5 PMD. Routing extension is the only supported type. v3: rebase. v2: add reparse control and rebase. Rongwei Liu (6): net/mlx5: sample the srv6 last segment net/mlx5/hws: fix potential wrong errno value net/mlx5/hws: add IPv6 routing extension push remove actions net/mlx5/hws: add setter for IPv6 routing push remove net/mlx5: implement IPv6 routing push remove net/mlx5/hws: add stc reparse support for srv6 push pop doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 +- doc/guides/rel_notes/release_23_11.rst | 2 + drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 29 ++ drivers/net/mlx5/hws/mlx5dr_action.c | 621 ++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 17 +- drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + drivers/net/mlx5/mlx5.c | 41 +- drivers/net/mlx5/mlx5.h | 7 + drivers/net/mlx5/mlx5_flow.h | 65 ++- drivers/net/mlx5/mlx5_flow_hw.c | 282 ++++++++++- 12 files changed, 1033 insertions(+), 47 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v3 1/6] net/mlx5: sample the srv6 last segment 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu ` (5 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas When removing the IPv6 routing extension header from the packets, the destination address should be updated to the last one in the segment list. Enlarge the hardware sample scope to cover the last segment. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++----------- drivers/net/mlx5/mlx5.h | 6 ++++++ 2 files changed, 36 insertions(+), 11 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index f929d6547c..92d66e8f23 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1067,6 +1067,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0, }; + uint32_t i; uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_dev_config *config = &priv->sh->cdev->config; @@ -1100,10 +1101,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) node.next_header_field_size = 0x8; node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP; node.in[0].compare_condition_value = IPPROTO_ROUTING; - node.sample[0].flow_match_sample_en = 1; - /* First come first serve no matter inner or outer. */ - node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; - node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* Final IPv6 address. */ + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + node.sample[i].flow_match_sample_en = 1; + node.sample[i].flow_match_sample_offset_mode = + MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* First come first serve no matter inner or outer. 
*/ + node.sample[i].flow_match_sample_tunnel_mode = + MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; + node.sample[i].flow_match_sample_field_base_offset = + (i + 1) * sizeof(uint32_t); /* in bytes */ + } + node.sample[0].flow_match_sample_field_base_offset = 0; node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP; node.out[0].compare_condition_value = IPPROTO_TCP; node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP; @@ -1116,8 +1125,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) goto error; } priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp; - priv->sh->srh_flex_parser.flex.mapnum = 1; - priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1; + priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM; + priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM; ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum, &priv->sh->srh_flex_parser.flex.devx_fp->anchor_id); @@ -1125,12 +1134,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) DRV_LOG(ERR, "Failed to query sample IDs."); goto error; } - ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0], - &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]); - if (ret) { - DRV_LOG(ERR, "Failed to query sample id information."); - goto error; + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i], + &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]); + if (ret) { + DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]); + goto error; + } + } + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i]; + priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT; + priv->sh->srh_flex_parser.flex.map[i].reg_id = i; + priv->sh->srh_flex_parser.flex.map[i].shift = + (i + 1) * sizeof(uint32_t) * CHAR_BIT; } + priv->sh->srh_flex_parser.flex.map[0].shift = 0; return 0; error: if (fp) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index a20acb6ca8..f13a56ee9e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1335,6 +1335,7 @@ struct mlx5_flex_pattern_field { uint16_t shift:5; uint16_t reg_id:5; }; + #define MLX5_INVALID_SAMPLE_REG_ID 0x1F /* Port flex item context. */ @@ -1346,6 +1347,11 @@ struct mlx5_flex_item { struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM]; }; +/* + * Sample an IPv6 address and the first dword of SRv6 header. + * Then it is 16 + 4 = 20 bytes which is 5 dwords. + */ +#define MLX5_SRV6_SAMPLE_NUM 5 /* Mlx5 internal flex parser profile structure. */ struct mlx5_internal_flex_parser_profile { uint32_t refcnt; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
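The offsets programmed by the loop above are easier to see written out; a small self-contained sketch (not part of the patch) that reproduces them:

#include <stdio.h>
#include <stdint.h>

#define SRV6_SAMPLE_NUM 5 /* mirrors MLX5_SRV6_SAMPLE_NUM */

int main(void)
{
	unsigned int i;

	for (i = 0; i < SRV6_SAMPLE_NUM; i++) {
		/* sample[0] is forced back to byte 0 (first dword of the SRH);
		 * samples 1..4 then cover bytes 8..23, i.e. Segment List[0],
		 * which holds the final destination in RFC 8754 ordering. */
		unsigned int off = i ? (i + 1) * (unsigned int)sizeof(uint32_t) : 0;

		printf("sample[%u] -> byte offset %u\n", i, off);
	}
	return 0;
}

Together the five samples cover 4 + 16 = 20 bytes, matching the comment added to mlx5.h.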
* [PATCH v3 2/6] net/mlx5/hws: fix potential wrong errno value 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu ` (4 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: hamdani, Alex Vesker A valid rte_errno is expected when a DR layer API returns an error, and it must not overwrite the value set by the lower layer. Fixes: a318b3d54772 ("net/mlx5/hws: support insert header action") Cc: hamdani@nvidia.com Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 59be8ae2c5..76ca57d302 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2262,6 +2262,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, if (!num_of_hdrs) { DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero"); + rte_errno = EINVAL; return NULL; } @@ -2309,7 +2310,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, reformat_hdrs, log_bulk_size); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); - rte_errno = EINVAL; goto free_reformat_hdrs; } -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
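The calling pattern this fix preserves, sketched with a hypothetical caller (not from the patch):

struct mlx5dr_action *a;

a = mlx5dr_action_create_insert_header(ctx, 1, &hdr, 0, flags);
if (!a)
	return -rte_errno; /* now reflects the real cause set by the
			    * lower layer, not a blanket EINVAL */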
* [PATCH v3 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu ` (3 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker Add two dr_actions to implement IPv6 routing extension push and remove. The new actions are combinations of multiple existing actions rather than new action types: basically, two modify-header actions plus one reformat action. The action order is the same as for the encap and decap actions. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 29 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 7 + drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + drivers/net/mlx5/mlx5_flow.h | 44 ++++ 6 files changed, 438 insertions(+), 3 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index a5ecce98e9..32ec3df7ef 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3586,6 +3586,7 @@ enum mlx5_ifc_header_anchors { MLX5_HEADER_ANCHOR_PACKET_START = 0x0, MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_TCP_UDP = 0x09, MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, }; diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index 2e692f76c3..9e7dd9c429 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -54,6 +54,8 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_REMOVE_HEADER, MLX5DR_ACTION_TYP_DEST_ROOT, MLX5DR_ACTION_TYP_DEST_ARRAY, + MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, + MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, MLX5DR_ACTION_TYP_MAX, }; @@ -278,6 +280,11 @@ struct mlx5dr_rule_action { uint8_t *data; } reformat; + struct { + uint32_t offset; + uint8_t *header; + } ipv6_ext; + struct { rte_be32_t vlan_hdr; } push_vlan; @@ -889,6 +896,28 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, uint32_t flags); +/* Create action to push or remove IPv6 extension header. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] type + * Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or + * MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT. + * @param[in] hdr + * Header for packet reformat. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags); + /* Destroy direct rule action.
* * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 76ca57d302..6ac3c2f782 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -26,7 +26,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -39,6 +40,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -61,6 +63,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), @@ -75,7 +78,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -88,6 +92,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER), @@ -1710,7 +1715,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) { - DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags); + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags); rte_errno = EINVAL; goto free_action; } @@ -2382,6 +2387,347 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, return NULL; } +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[3] = {0}; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Next_hdr will be copied to ipv6.protocol after pop done. 
+ */ + MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[0], length, 8); + MLX5_SET(copy_action_in, &cmd[0], src_offset, 24); + MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id); + + /* Add nop between the continuous same modify field id */ + MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP); + + /* Clear next_hdr for right checksum */ + MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[2], length, 8); + MLX5_SET(set_action_in, &cmd[2], offset, 24); + MLX5_SET(set_action_in, &cmd[2], field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]); + MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Restore next_hdr from seg_left for flex parser identifying */ + MLX5_SET(copy_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[4], length, 8); + MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24); + MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */ + MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, cmd, length, 8); + MLX5_SET(copy_action_in, cmd, src_offset, 24); + MLX5_SET(copy_action_in, cmd, src_field, mod_id); + MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static int +mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action) +{ + uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx); + struct mlx5dr_action_remove_header_attr hdr_attr; + uint32_t i; + + if (!anchor_id) { + rte_errno = EINVAL; + return rte_errno; + } + + action->ipv6_route_ext.action[0] = + 
mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action); + + hdr_attr.by_anchor.decap = 1; + hdr_attr.by_anchor.start_anchor = anchor_id; + hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + hdr_attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->ipv6_route_ext.action[3] = + mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags); + + if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + + /* Set ipv6.protocol to IPPROTO_ROUTING */ + MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, cmd, length, 8); + MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0, + action->flags | MLX5DR_ACTION_FLAG_SHARED); +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, + uint32_t bulk_size, + uint8_t *data) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + uint32_t *ipv6_dst_addr = NULL; + uint8_t seg_left, next_hdr; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + } + + /* Copy IPv6 destination address from ipv6_route_ext.last_segment */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[i], field, field[i]); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++)); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */ + MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[4], length, 8); + MLX5_SET(set_action_in, &cmd[4], offset, 24); + MLX5_SET(set_action_in, &cmd[4], field, mod_id); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr); + MLX5_SET(set_action_in, &cmd[4], data, next_hdr); + } + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return 
mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + bulk_size, action->flags); +} + +static int +mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, + struct mlx5dr_action_reformat_header *hdr, + uint32_t bulk_size) +{ + struct mlx5dr_action_insert_header insert_hdr = { {0} }; + uint8_t header[MLX5_PUSH_MAX_LEN]; + uint32_t i; + + if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN || + ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) { + DR_LOG(ERR, "Invalid ipv6_route_ext header"); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + memcpy(header, hdr->data, hdr->sz); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + } + + insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + insert_hdr.encap = 1; + insert_hdr.hdr.sz = hdr->sz; + insert_hdr.hdr.data = header; + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, + bulk_size, action->flags); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data); + + if (!action->ipv6_route_ext.action[0] || + !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 extension actions is not supported"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) { + rte_errno = ENOMEM; + return NULL; + } + + switch (action_type) { + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) { + DR_LOG(ERR, "Pop ipv6_route_ext must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_pop_ipv6_route_ext(action); + break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); + break; + default: + DR_LOG(ERR, "Unsupported action type %d\n", action_type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (ret) { + DR_LOG(ERR, "Failed to create IPv6 extension reformat action"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2455,6 +2801,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + for (i = 
0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index e56f5b59c7..d0152dde3b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -8,6 +8,9 @@ /* Max number of STEs needed for a rule (including match) */ #define MLX5DR_ACTION_MAX_STE 10 +/* Max number of internal subactions of ipv6_ext */ +#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 + enum mlx5dr_action_stc_idx { MLX5DR_ACTION_STC_IDX_CTRL = 0, MLX5DR_ACTION_STC_IDX_HIT = 1, @@ -143,6 +146,10 @@ struct mlx5dr_action { uint8_t offset; bool encap; } reformat; + struct { + struct mlx5dr_action + *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA]; + } ipv6_route_ext; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 5111f41648..1e5ef9cf67 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -31,6 +31,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT", [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", + [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT", + [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index e637c98b95..43608e15d2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -595,6 +595,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -2898,6 +2899,49 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline uint8_t +flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + } +#else + RTE_SET_USED(dr_ctx); +#endif + return 0; +} + +static __rte_always_inline uint16_t +flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + struct mlx5_flex_parser_devx *fp; + + if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM) + return 0; + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) { + fp = priv->sh->srh_flex_parser.flex.devx_fp; + return fp->sample_info[idx].modify_field_id; + } + } +#else + RTE_SET_USED(dr_ctx); + RTE_SET_USED(idx); +#endif + return 0; +} + void mlx5_indirect_list_handles_release(struct rte_eth_dev *dev); void -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
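A caller-side sketch of the new entry point (assuming an existing mlx5dr_context `ctx`; the shared requirement for pop follows the checks in the code above):

struct mlx5dr_action *pop;

/* Pop must be created as a shared action; the header and bulk size are unused. */
pop = mlx5dr_action_create_reformat_ipv6_ext(ctx,
					     MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
					     NULL, 0,
					     MLX5DR_ACTION_FLAG_HWS_RX |
					     MLX5DR_ACTION_FLAG_SHARED);
if (!pop)
	return -rte_errno; /* EINVAL/ENOTSUP per the checks above */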
* [PATCH v3 4/6] net/mlx5/hws: add setter for IPv6 routing push remove 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu ` (2 preceding siblings ...) 2023-10-31 10:51 ` [PATCH v3 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 5/6] net/mlx5: implement " Rongwei Liu ` (2 subsequent siblings) 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker The rte action is translated to multiple dr_actions, which need different setters to program them. In order to leverage the existing setter logic, a new callback named fetch_opt is introduced, taking a unique parameter. Each setter may have different reparse properties: a setter that requires no reparse cannot share a slot with one that has reparse enabled, even if there is spare space. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 3 +- 2 files changed, 176 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 6ac3c2f782..281b09a582 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -3311,6 +3311,121 @@ mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; } +static void +mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data) +{ + uint8_t *action_ptr = mh_data; + uint32_t *ipv6_dst_addr; + uint8_t seg_left; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list which is the next hop */ + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + + /* Load next hop IPv6 address in reverse order to ipv6.dst_address */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, action_ptr, data, be32toh(*ipv6_dst_addr++)); + action_ptr += MLX5DR_MODIFY_ACTION_SIZE; + } + + /* Set ipv6_route_ext.next_hdr per user input */ + MLX5_SET(set_action_in, action_ptr, data, *data); +} + +static void +mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0}; + struct mlx5dr_action *ipv6_ext_action; + uint8_t *header; + + header = rule_action[setter->idx_double].ipv6_ext.header; + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.modify_header.offset = 0; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.data = NULL; + } else { + /* + * Copy ipv6_dst from ipv6_route_ext.last_seg. + * Set ipv6_route_ext.next_hdr.
+ */ + mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd); + tmp_rule_action.modify_header.data = (uint8_t *)cmd; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_modify_header(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + struct mlx5dr_action *ipv6_ext_action; + uint8_t header[MLX5_PUSH_MAX_LEN]; + + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.reformat.offset = 0; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.data = NULL; + } else { + memcpy(header, rule_action[setter->idx_double].ipv6_ext.header, + tmp_rule_action.action->reformat.header_size); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + tmp_rule_action.reformat.data = header; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_insert_ptr(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = &apply->rule_action[setter->idx_single]; + uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1; + struct mlx5dr_action *action; + + /* Pop the ipv6_route_ext as set_single logic */ + action = rule_action->action->ipv6_route_ext.action[idx]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(action->stc[apply->tbl_type].offset); +} + int mlx5dr_action_template_process(struct mlx5dr_action_template *at) { struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; @@ -3374,6 +3489,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Set ipv6_route_ext.next_hdr to 0 for checksum bug. + */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* + * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left. + * Load the final destination address from flex parser sample 1->4. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + /* Pop ipv6_route_ext */ + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* + * Load the right ipv6_route_ext.next_hdr per user input buffer. + * Load the next dest_addr from the ipv6_route_ext.seg_list[last]. + */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index d0152dde3b..ce9091a336 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -6,7 +6,7 @@ #define MLX5DR_ACTION_H_ /* Max number of STEs needed for a rule (including match) */ -#define MLX5DR_ACTION_MAX_STE 10 +#define MLX5DR_ACTION_MAX_STE 20 /* Max number of internal subactions of ipv6_ext */ #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 @@ -109,6 +109,7 @@ struct mlx5dr_actions_wqe_setter { uint8_t idx_ctr; uint8_t idx_hit; uint8_t flags; + uint8_t extra_data; }; struct mlx5dr_action_template { -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
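For context, the fields consumed by these setters are supplied per rule at enqueue time through the ipv6_ext member of struct mlx5dr_rule_action, exactly as the diff above reads them. A hedged sketch of the non-shared push case (the buffer and slot index are illustrative):

	struct mlx5dr_rule_action ra = {
		.action = push_action, /* from mlx5dr_action_create_reformat_ipv6_ext() */
		.ipv6_ext = {
			.header = srv6_buf, /* full ipv6_route_ext image for this rule */
			.offset = rule_idx, /* bulk slot; unused for SHARED actions */
		},
	};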
* [PATCH v3 5/6] net/mlx5: implement IPv6 routing push remove 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu ` (3 preceding siblings ...) 2023-10-31 10:51 ` [PATCH v3 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-10-31 10:51 ` [PATCH v3 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Reserve a push data buffer for each job; the maximum length is set to 128 bytes for now. Only the IPPROTO_ROUTING type is supported when translating the rte flow action. Remove actions must be shared globally and support only TCP or UDP as the next layer. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 +- doc/guides/rel_notes/release_23_11.rst | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 21 +- drivers/net/mlx5/mlx5_flow_hw.c | 282 ++++++++++++++++++++++++- 6 files changed, 309 insertions(+), 10 deletions(-) diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 0ed9a6aefc..0739fe9d63 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -108,6 +108,8 @@ flag = Y inc_tcp_ack = Y inc_tcp_seq = Y indirect_list = Y +ipv6_ext_push = Y +ipv6_ext_remove = Y jump = Y mark = Y meter = Y diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index be5054e68a..955dedf3db 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -148,7 +148,9 @@ Features - Matching on GTP extension header with raw encap/decap action. - Matching on Geneve TLV option header with raw encap/decap action. - Matching on ESP header SPI field. +- Matching on flex item with specific pattern. - Matching on InfiniBand BTH. +- Modify flex item field. - Modify IPv4/IPv6 ECN field. - RSS support in sample action. - E-Switch mirroring and jump. @@ -166,7 +168,7 @@ Features - Sub-Function. - Matching on represented port. - Matching on aggregated affinity. - +- Push or remove IPv6 routing extension. Limitations ----------- @@ -759,6 +761,13 @@ Limitations to the representor of the source virtual port (SF/VF), while if it is disabled, the traffic will be routed based on the steering rules in the ingress domain. +- IPv6 routing extension push or remove: + + - Supported only with HW Steering enabled (``dv_flow_en`` = 2). + - Supported in non-zero group (No limits on transfer domain if `fdb_def_rule_en` = 1 which is default). + - Only supports TCP or UDP as next layer. + - IPv6 routing header must be the only present extension. + - Not supported on guest port. Statistics ---------- diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index 93999893bd..5ef309ea59 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -157,6 +157,8 @@ New Features * Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action. * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item. * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror. + * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+ * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action. * **Updated Solarflare net driver.** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f13a56ee9e..277bbbf407 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -373,6 +373,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 43608e15d2..c7be1f3553 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -363,6 +363,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44) #define MLX5_FLOW_ACTION_QUOTA (1ull << 46) #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -1269,6 +1271,8 @@ typedef int const struct rte_flow_action *, struct mlx5dr_rule_action *); +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 extension push data len. */ + uint16_t len; + } ipv6_ext; struct { uint32_t id; uint32_t conf_masked:1; @@ -1359,6 +1367,7 @@ struct rte_flow_actions_template { uint16_t *src_off; /* RTE action displacement from app. template */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */ uint32_t refcnt; /* Reference counter. */ uint8_t flex_item; /* flex item index. */ }; @@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push remove action struct. */ +struct mlx5_hw_push_remove_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_remove action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1415,6 +1431,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/remove action. */ + struct mlx5_hw_push_remove_action *push_remove; + uint16_t push_remove_pos; /* Push/remove action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 977751394e..592d436099 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_remove) { + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } if (acts->mhdr) { flow_hw_template_destroy_mhdr_action(acts->mhdr); mlx5_free(acts->mhdr); @@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. + */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len) +{ + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->ipv6_ext.len = len; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, return 0; } + +static int +mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct rte_flow_actions_template *at, + uint8_t *push_data, uint8_t *push_data_m, + size_t push_size, uint16_t recom_src, + enum mlx5dr_action_type recom_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + const struct rte_flow_attr *attr = &table_attr->flow_attr; + enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); + struct mlx5_action_construct_data *act_data; + struct mlx5dr_action_reformat_header hdr = {0}; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_remove) + push_size, + 0, SOCKET_ID_ANY); + if (!acts->push_remove) + return -ENOMEM; + + switch (recom_type) { + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!push_data || !push_size) + goto err1; + if (!push_data_m) { + bulk = rte_log2_u32(table_attr->nb_flows); + } else { + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + } + acts->push_remove->data_size = push_size; + memcpy(acts->push_remove->data, push_data, push_size); + hdr.data = push_data; + hdr.sz = push_size; + break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + break; + default: + break; + } + + acts->push_remove->action = + mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx, + recom_type, &hdr, bulk, flag); + if (!acts->push_remove->action) + 
goto err1; + acts->rule_acts[at->recom_off].action = acts->push_remove->action; + acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data; + acts->rule_acts[at->recom_off].ipv6_ext.offset = 0; + acts->push_remove_pos = at->recom_off; + if (!acts->push_remove->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size); + if (!act_data) + goto err; + } + return 0; +err: + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); +err1: + if (acts->push_remove) { + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } + return -EINVAL; +} + /** * Translate rte_flow actions to DR action. * @@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *masks = at->masks; enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t jump_pos; uint32_t ct_idx; @@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + if (ipv6_ext_data) { + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + } + recom_src = src_pos; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: flow_hw_translate_group(dev, cfg, attr->group, &target_grp, error); @@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, if (ret) goto err; } + if (recom_used) { + MLX5_ASSERT(at->recom_off != UINT16_MAX); + ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data, + push_data_m, push_size, recom_src, + recom_type); + if (ret) + goto err; + } return 0; err: err = rte_errno; @@ -2719,11 +2882,13 @@ 
flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, + act_data->ipv6_ext.len); + MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_remove && !hw_acts->push_remove->shared) { + rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = + job->flow->res_idx - 1; + rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, #endif uint16_t i; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. */ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. 
*/ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, [RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, }; static inline void @@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, * NULL otherwise. */ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for remove anchor. 
*/ + if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT || + recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, break; } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, struct rte_flow_actions_template *template, struct rte_flow_error *error __rte_unused) { + uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE | + MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) { DRV_LOG(WARNING, "Action template %p is still in use.", (void *)template); @@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->action_flags & flag) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; struct rte_flow_hw *upd_flow = NULL; @@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; upd_flow = (struct rte_flow_hw *) &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
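At the rte_flow level, the validation above accepts the following action configurations. A minimal sketch (the template table setup is omitted and the header buffer is illustrative):

	/* Push: type must be IPPROTO_ROUTING, size <= MLX5_PUSH_MAX_LEN (128B). */
	struct rte_flow_action_ipv6_ext_push push_conf = {
		.data = srv6_hdr,     /* complete IPv6 routing extension image */
		.size = srv6_hdr_len,
		.type = IPPROTO_ROUTING,
	};

	/* Remove: must be fully masked in the actions template, i.e. shared. */
	struct rte_flow_action_ipv6_ext_remove remove_conf = {
		.type = IPPROTO_ROUTING,
	};

	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, .conf = &push_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};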
* [PATCH v3 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu ` (4 preceding siblings ...) 2023-10-31 10:51 ` [PATCH v3 5/6] net/mlx5: implement " Rongwei Liu @ 2023-10-31 10:51 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu 6 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-10-31 10:51 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Erez Shitrit After pushing/popping srv6 into/from IPv6 packets, the checksum needs to be correct. In order to achieve this, each STE's reparse behavior must be controlled (CX7 and above). Add two more enumeration values to allow external control of the reparse property in the STC. 1. Push a. 1st STE, insert header action, reparse ignored (default reparse always) b. 2nd STE, modify IPv6 protocol, reparse always as default. c. 3rd STE, modify header list, reparse always (default reparse ignored) 2. Pop a. 1st STE, modify header list, reparse always (default reparse ignored) b. 2nd STE, modify header list, reparse always (default reparse ignored) c. 3rd STE, modify IPv6 protocol, reparse ignored (default reparse always); remove header action, reparse always as default. For CX6Lx and CX6Dx, the reparse behavior is controlled by RTC as always. Only the pop action can work well. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 115 +++++++++++++++++++-------- drivers/net/mlx5/hws/mlx5dr_action.h | 7 ++ 2 files changed, 87 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 281b09a582..daeabead2a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -640,6 +640,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; if (action->modify_header.require_reparse) attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; @@ -678,9 +679,12 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: case MLX5DR_ACTION_TYP_INSERT_HEADER: + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; + if (!action->reformat.require_reparse) + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->insert_header.encap = action->reformat.encap; attr->insert_header.insert_anchor = action->reformat.anchor; attr->insert_header.arg_id = action->reformat.arg_obj->id; @@ -1441,7 +1445,7 @@ static int mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, uint8_t num_of_hdrs, struct mlx5dr_action_reformat_header *hdrs, - uint32_t log_bulk_sz) + uint32_t log_bulk_sz, uint32_t reparse) { struct mlx5dr_devx_obj *arg_obj; size_t max_sz = 0; @@ -1478,6 +1482,11 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, action[i].reformat.encap = 1; } + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].reformat.require_reparse = true; + else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
+ action[i].reformat.require_reparse = true; + ret = mlx5dr_action_create_stcs(&action[i], NULL); if (ret) { DR_LOG(ERR, "Failed to create stc for reformat"); @@ -1514,7 +1523,8 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, - log_bulk_sz); + log_bulk_sz, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); if (ret) goto put_shared_stc; @@ -1657,7 +1667,8 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action, ret = mlx5dr_action_create_stcs(action, NULL); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: - ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size); + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size); @@ -1765,7 +1776,8 @@ static int mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, uint8_t num_of_patterns, struct mlx5dr_action_mh_pattern *pattern, - uint32_t log_bulk_size) + uint32_t log_bulk_size, + uint32_t reparse) { struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL; struct mlx5dr_context *ctx = action->ctx; @@ -1799,8 +1811,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, action[i].modify_header.num_of_patterns = num_of_patterns; action[i].modify_header.max_num_of_actions = max_mh_actions; action[i].modify_header.num_of_actions = num_actions; - action[i].modify_header.require_reparse = - mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].modify_header.require_reparse = + mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON) + action[i].modify_header.require_reparse = true; if (num_actions == 1) { pat_obj = NULL; @@ -1843,12 +1859,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, return rte_errno; } -struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - uint8_t num_of_patterns, - struct mlx5dr_action_mh_pattern *patterns, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_modify_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action *action; int ret; @@ -1896,7 +1912,8 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_modify_header_hws(action, num_of_patterns, patterns, - log_bulk_size); + log_bulk_size, + reparse); if (ret) goto free_action; @@ -1907,6 +1924,17 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_modify_header_reparse(ctx, num_of_patterns, patterns, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} static struct mlx5dr_devx_obj * mlx5dr_action_dest_array_process_reformat(struct mlx5dr_context *ctx, enum mlx5dr_action_type type, @@ -2254,12 +2282,12 @@ mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx, return action; } -struct mlx5dr_action * -mlx5dr_action_create_insert_header(struct 
mlx5dr_context *ctx, - uint8_t num_of_hdrs, - struct mlx5dr_action_insert_header *hdrs, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action_reformat_header *reformat_hdrs; struct mlx5dr_action *action; @@ -2312,7 +2340,8 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, } ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, - reformat_hdrs, log_bulk_size); + reformat_hdrs, log_bulk_size, + reparse); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); goto free_reformat_hdrs; @@ -2329,6 +2358,18 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_insert_header_reparse(ctx, num_of_hdrs, hdrs, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} + struct mlx5dr_action * mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, @@ -2422,8 +2463,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2469,8 +2511,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2496,8 +2539,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) pattern.data = (__be64 *)cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); } static int @@ -2644,8 +2688,9 @@ mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, insert_hdr.hdr.sz = hdr->sz; insert_hdr.hdr.data = header; action->ipv6_route_ext.action[0] = - mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, - bulk_size, action->flags); + mlx5dr_action_create_insert_header_reparse(action->ctx, 1, &insert_hdr, + bulk_size, action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); action->ipv6_route_ext.action[1] = mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); action->ipv6_route_ext.action[2] = @@ -2678,12 +2723,6 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, struct mlx5dr_action *action; int ret; - if (mlx5dr_context_cap_dynamic_reparse(ctx)) { - DR_LOG(ERR, "IPv6 extension actions is not supported"); - rte_errno = ENOTSUP; - return NULL; - } - if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); @@ -2708,6 +2747,12 @@ 
mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_pop_ipv6_route_ext(action); break; case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 routing extension push actions is not supported"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); break; default: diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index ce9091a336..ec6605bf7a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -65,6 +65,12 @@ enum mlx5dr_action_setter_flag { ASF_HIT = 1 << 7, }; +enum mlx5dr_action_stc_reparse { + MLX5DR_ACTION_STC_REPARSE_DEFAULT, + MLX5DR_ACTION_STC_REPARSE_ON, + MLX5DR_ACTION_STC_REPARSE_OFF, +}; + struct mlx5dr_action_default_stc { struct mlx5dr_pool_chunk nop_ctr; struct mlx5dr_pool_chunk nop_dw5; @@ -146,6 +152,7 @@ struct mlx5dr_action { uint8_t anchor; uint8_t offset; bool encap; + uint8_t require_reparse; } reformat; struct { struct mlx5dr_action -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
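In short, the new reparse argument threads through the internal creation helpers as a third mode beside the pattern-derived default. A sketch of how this patch wires the srv6 sub-actions (internal helpers, not public API; "pattern", "insert_hdr" and the remaining arguments are assumed to be set up as in the surrounding code):

	/* Default keeps the old behavior: derived from the pattern for
	 * modify header, always-on for insert header. */
	mlx5dr_action_create_modify_header_reparse(ctx, 1, &pattern, 0, flags,
						   MLX5DR_ACTION_STC_REPARSE_DEFAULT);

	/* Pop mhdr1/mhdr2: force reparse on so the next STE sees the
	 * updated packet layout. */
	mlx5dr_action_create_modify_header_reparse(ctx, 1, &pattern, 0, flags,
						   MLX5DR_ACTION_STC_REPARSE_ON);

	/* Push insert header and pop mhdr3: force reparse off. */
	mlx5dr_action_create_insert_header_reparse(ctx, 1, &insert_hdr, bulk_size,
						   flags,
						   MLX5DR_ACTION_STC_REPARSE_OFF);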
* [PATCH v4 00/13] support IPv6 push remove action 2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu ` (5 preceding siblings ...) 2023-10-31 10:51 ` [PATCH v3 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 01/13] net/mlx5/hws: support insert header action Rongwei Liu ` (13 more replies) 0 siblings, 14 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Support IPv6 extension push/remove actions in MLX5 PMD. Routing extension is the only supported type. v4: add more dependencies. v3: rebase. v2: add reparse control and rebase. Alex Vesker (4): net/mlx5/hws: allow jump to TIR over FDB net/mlx5/hws: support dynamic re-parse net/mlx5/hws: dynamic re-parse for modify header net/mlx5/hws: fix incorrect re-parse on complex rules Hamdan Igbaria (2): net/mlx5/hws: support insert header action net/mlx5/hws: support remove header action Rongwei Liu (7): net/mlx5: sample the srv6 last segment net/mlx5/hws: fix potential wrong rte_errno value net/mlx5/hws: add IPv6 routing extension push remove actions net/mlx5/hws: add setter for IPv6 routing push remove net/mlx5: implement IPv6 routing push remove net/mlx5/hws: fix srv6 push compilation failure net/mlx5/hws: add stc reparse support for srv6 push pop doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 + doc/guides/rel_notes/release_23_11.rst | 2 + drivers/common/mlx5/mlx5_prm.h | 13 +- drivers/net/mlx5/hws/mlx5dr.h | 105 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 873 +++++++++++++++++++++++-- drivers/net/mlx5/hws/mlx5dr_action.h | 32 +- drivers/net/mlx5/hws/mlx5dr_cmd.c | 11 +- drivers/net/mlx5/hws/mlx5dr_cmd.h | 3 + drivers/net/mlx5/hws/mlx5dr_context.c | 15 + drivers/net/mlx5/hws/mlx5dr_context.h | 9 +- drivers/net/mlx5/hws/mlx5dr_debug.c | 4 + drivers/net/mlx5/hws/mlx5dr_internal.h | 1 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 2 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 41 +- drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 2 + drivers/net/mlx5/mlx5.c | 41 +- drivers/net/mlx5/mlx5.h | 7 + drivers/net/mlx5/mlx5_flow.h | 65 +- drivers/net/mlx5/mlx5_flow_hw.c | 283 +++++++- 20 files changed, 1438 insertions(+), 84 deletions(-) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v4 01/13] net/mlx5/hws: support insert header action 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 02/13] net/mlx5/hws: support remove " Rongwei Liu ` (12 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Hamdan Igbaria, Alex Vesker From: Hamdan Igbaria <hamdani@nvidia.com> Support insert header action; this will allow encap at a specific anchor and offset selected by the user. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr.h | 36 ++++++++ drivers/net/mlx5/hws/mlx5dr_action.c | 112 +++++++++++++++++++++---- drivers/net/mlx5/hws/mlx5dr_action.h | 5 +- drivers/net/mlx5/hws/mlx5dr_cmd.c | 4 +- drivers/net/mlx5/hws/mlx5dr_debug.c | 1 + drivers/net/mlx5/hws/mlx5dr_internal.h | 1 + 6 files changed, 141 insertions(+), 18 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index 39d902e762..f3367297c6 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -45,6 +45,7 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_PUSH_VLAN, MLX5DR_ACTION_TYP_ASO_METER, MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_INSERT_HEADER, MLX5DR_ACTION_TYP_DEST_ROOT, MLX5DR_ACTION_TYP_DEST_ARRAY, MLX5DR_ACTION_TYP_MAX, @@ -169,6 +170,20 @@ struct mlx5dr_action_reformat_header { void *data; }; +struct mlx5dr_action_insert_header { + struct mlx5dr_action_reformat_header hdr; + /* PRM start anchor to which header will be inserted */ + uint8_t anchor; + /* Header insertion offset in bytes, from the start + * anchor to the location where new header will be inserted. + */ + uint8_t offset; + /* Indicates this header insertion adds encapsulation header to the packet, + * requiring device to update offloaded fields (for example IPv4 total length). + */ + bool encap; +}; + struct mlx5dr_action_mh_pattern { /* Byte size of modify actions provided by "data" */ size_t sz; @@ -691,6 +706,27 @@ mlx5dr_action_create_dest_root(struct mlx5dr_context *ctx, uint16_t priority, uint32_t flags); +/* Create insert header action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] num_of_hdrs + * Number of provided headers in "hdrs" array. + * @param[in] hdrs + * Headers array containing header information. + * @param[in] log_bulk_size + * Number of unique values used with this insert header. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags); + /* Destroy direct rule action.
* * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 11a7c58925..45e23e2d28 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -28,6 +28,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -47,6 +48,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -66,6 +68,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -555,20 +558,15 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: - attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; - attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->insert_header.encap = 1; - attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; - attr->insert_header.arg_id = action->reformat.arg_obj->id; - attr->insert_header.header_size = action->reformat.header_size; - break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_INSERT_HEADER: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->insert_header.encap = 1; - attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.encap = action->reformat.encap; + attr->insert_header.insert_anchor = action->reformat.anchor; attr->insert_header.arg_id = action->reformat.arg_obj->id; attr->insert_header.header_size = action->reformat.header_size; + attr->insert_header.insert_offset = action->reformat.offset; break; case MLX5DR_ACTION_TYP_ASO_METER: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; @@ -1246,7 +1244,7 @@ mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, } static int -mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action, +mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, uint8_t num_of_hdrs, struct mlx5dr_action_reformat_header *hdrs, uint32_t log_bulk_sz) @@ -1256,8 +1254,8 @@ mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action, int ret, i; for (i = 0; i < num_of_hdrs; i++) { - if (hdrs[i].sz % 2 != 0) { - DR_LOG(ERR, "Header data size should be multiply of 2"); + if (hdrs[i].sz % W_SIZE != 0) { + DR_LOG(ERR, "Header data size should be in WORD granularity"); rte_errno = EINVAL; return rte_errno; } @@ -1279,6 +1277,13 @@ mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action, action[i].reformat.num_of_hdrs = num_of_hdrs; action[i].reformat.max_hdr_sz = max_sz; + if (action[i].type == MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2 || + action[i].type == 
MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3) { + action[i].reformat.anchor = MLX5_HEADER_ANCHOR_PACKET_START; + action[i].reformat.offset = 0; + action[i].reformat.encap = 1; + } + ret = mlx5dr_action_create_stcs(&action[i], NULL); if (ret) { DR_LOG(ERR, "Failed to create stc for reformat"); @@ -1312,7 +1317,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, } /* Reuse the insert with pointer for the L2L3 header */ - ret = mlx5dr_action_handle_l2_to_tunnel_l2(action, + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, log_bulk_sz); @@ -1456,7 +1461,7 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action, ret = mlx5dr_action_create_stcs(action, NULL); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: - ret = mlx5dr_action_handle_l2_to_tunnel_l2(action, num_of_hdrs, hdrs, bulk_size); + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size); @@ -1486,6 +1491,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, if (!num_of_hdrs) { DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero"); + rte_errno = EINVAL; return NULL; } @@ -1521,7 +1527,6 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_reformat_hws(action, num_of_hdrs, hdrs, log_bulk_size); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); - rte_errno = EINVAL; goto free_action; } @@ -1943,6 +1948,81 @@ mlx5dr_action_create_dest_root(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action_reformat_header *reformat_hdrs; + struct mlx5dr_action *action; + int i, ret; + + if (!num_of_hdrs) { + DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero"); + return NULL; + } + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Dynamic reformat action not supported over root"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic_bulk(ctx, flags, + MLX5DR_ACTION_TYP_INSERT_HEADER, + num_of_hdrs); + if (!action) + return NULL; + + reformat_hdrs = simple_calloc(num_of_hdrs, sizeof(*reformat_hdrs)); + if (!reformat_hdrs) { + DR_LOG(ERR, "Failed to allocate memory for reformat_hdrs"); + rte_errno = ENOMEM; + goto free_action; + } + + for (i = 0; i < num_of_hdrs; i++) { + if (hdrs[i].offset % W_SIZE != 0) { + DR_LOG(ERR, "Header offset should be in WORD granularity"); + rte_errno = EINVAL; + goto free_reformat_hdrs; + } + + action[i].reformat.anchor = hdrs[i].anchor; + action[i].reformat.encap = hdrs[i].encap; + action[i].reformat.offset = hdrs[i].offset; + reformat_hdrs[i].sz = hdrs[i].hdr.sz; + reformat_hdrs[i].data = hdrs[i].hdr.data; + } + + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, + reformat_hdrs, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to create HWS reformat action"); + rte_errno = EINVAL; + goto free_reformat_hdrs; + } + + simple_free(reformat_hdrs); + + return action; + +free_reformat_hdrs: + simple_free(reformat_hdrs); +free_action: + simple_free(action); + return NULL; +} + 
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2004,6 +2084,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_INSERT_HEADER: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: for (i = 0; i < action->reformat.num_of_hdrs; i++) mlx5dr_action_destroy_stcs(&action[i]); @@ -2547,6 +2628,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_single = i; break; + case MLX5DR_ACTION_TYP_INSERT_HEADER: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: /* Double insert header with pointer */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 582a38bebc..593a7f3817 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -131,8 +131,11 @@ struct mlx5dr_action { struct { struct mlx5dr_devx_obj *arg_obj; uint32_t header_size; - uint8_t num_of_hdrs; uint16_t max_hdr_sz; + uint8_t num_of_hdrs; + uint8_t anchor; + uint8_t offset; + bool encap; } reformat; struct { struct mlx5dr_devx_obj *devx_obj; diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index c52cdd0767..f24651041c 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -492,9 +492,9 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, stc_attr->insert_header.insert_anchor); /* HW gets the next 2 sizes in words */ MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, - stc_attr->insert_header.header_size / 2); + stc_attr->insert_header.header_size / W_SIZE); MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, - stc_attr->insert_header.insert_offset / 2); + stc_attr->insert_header.insert_offset / W_SIZE); MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, stc_attr->insert_header.arg_id); break; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index e7b1f2cc32..a04dfbb97a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -24,6 +24,7 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", [MLX5DR_ACTION_TYP_DEST_ROOT] = "DEST_ROOT", [MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY", + [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h index 021d599a56..b9efdc4a9a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_internal.h +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -40,6 +40,7 @@ #include "mlx5dr_pat_arg.h" #include "mlx5dr_crc32.h" +#define W_SIZE 2 #define DW_SIZE 4 #define BITS_IN_BYTE 8 #define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
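A minimal usage sketch of the new entry point (not part of the patch; the header buffer, anchor and flag are illustrative, with the anchor taken from the PRM MLX5_HEADER_ANCHOR_* set already used in this file):

	/* Insert one header at the packet start; the offset and the header
	 * size must both be in WORD (2B) granularity. */
	struct mlx5dr_action_insert_header ih = {
		.hdr = { .sz = hdr_len, .data = hdr_buf },
		.anchor = MLX5_HEADER_ANCHOR_PACKET_START,
		.offset = 0,
		.encap = true, /* HW updates offloaded fields, e.g. IPv4 total length */
	};
	struct mlx5dr_action *a =
		mlx5dr_action_create_insert_header(ctx, 1, &ih, log_bulk_size,
						   MLX5DR_ACTION_FLAG_HWS_RX);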
* [PATCH v4 02/13] net/mlx5/hws: support remove header action 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 01/13] net/mlx5/hws: support insert header action Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 03/13] net/mlx5/hws: allow jump to TIR over FDB Rongwei Liu ` (11 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Hamdan Igbaria, Alex Vesker From: Hamdan Igbaria <hamdani@nvidia.com> Support the remove header action. This action lets the user perform dynamic decaps, either by providing a start anchor and the number of words to remove, or by providing a start anchor and an end anchor. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr.h | 40 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.c | 77 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 7 +++ drivers/net/mlx5/hws/mlx5dr_debug.c | 1 + 4 files changed, 125 insertions(+) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index f3367297c6..e21dcd72ae 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -46,6 +46,7 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_ASO_METER, MLX5DR_ACTION_TYP_ASO_CT, MLX5DR_ACTION_TYP_INSERT_HEADER, + MLX5DR_ACTION_TYP_REMOVE_HEADER, MLX5DR_ACTION_TYP_DEST_ROOT, MLX5DR_ACTION_TYP_DEST_ARRAY, MLX5DR_ACTION_TYP_MAX, @@ -184,6 +185,29 @@ struct mlx5dr_action_insert_header { bool encap; }; +enum mlx5dr_action_remove_header_type { + MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET, + MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER, +}; + +struct mlx5dr_action_remove_header_attr { + enum mlx5dr_action_remove_header_type type; + union { + struct { + /* PRM start anchor from which header will be removed */ + uint8_t start_anchor; + /* PRM end anchor till which header will be removed */ + uint8_t end_anchor; + bool decap; + } by_anchor; + struct { + /* PRM start anchor from which header will be removed */ + uint8_t start_anchor; + uint8_t size; + } by_offset; + }; +}; + struct mlx5dr_action_mh_pattern { /* Byte size of modify actions provided by "data" */ size_t sz; @@ -727,6 +751,22 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, uint32_t log_bulk_size, uint32_t flags); +/* Create remove header action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] attr + * attributes: specifies the remove header type, PRM start anchor and + * the PRM end anchor or the PRM start anchor and remove size in bytes. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, + struct mlx5dr_action_remove_header_attr *attr, + uint32_t flags); + /* Destroy direct rule action. 
* * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 45e23e2d28..f794d6cd78 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -7,6 +7,8 @@ #define WIRE_PORT 0xFFFF #define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 +/* Header removal size limited to 128B (64 words) */ +#define MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE 128 /* This is the maximum allowed action order for each table type: * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term @@ -18,6 +20,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { [MLX5DR_TABLE_TYPE_NIC_RX] = { BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -58,6 +61,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_LAST), }, [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -603,6 +607,19 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; break; + case MLX5DR_ACTION_TYP_REMOVE_HEADER: + if (action->remove_header.type == MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER) { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->remove_header.decap = action->remove_header.decap; + attr->remove_header.start_anchor = action->remove_header.start_anchor; + attr->remove_header.end_anchor = action->remove_header.end_anchor; + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->remove_words.start_anchor = action->remove_header.start_anchor; + attr->remove_words.num_of_words = action->remove_header.num_of_words; + } + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; default: DR_LOG(ERR, "Invalid action type %d", action->type); assert(false); @@ -2023,6 +2040,64 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, + struct mlx5dr_action_remove_header_attr *attr, + uint32_t flags) +{ + struct mlx5dr_action *action; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Remove header action not supported over root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_REMOVE_HEADER); + if (!action) + return NULL; + + switch (attr->type) { + case MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER: + action->remove_header.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->remove_header.start_anchor = attr->by_anchor.start_anchor; + action->remove_header.end_anchor = attr->by_anchor.end_anchor; + action->remove_header.decap = attr->by_anchor.decap; + break; + case MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET: + if (attr->by_offset.size % W_SIZE != 0) { + DR_LOG(ERR, "Invalid size, HW supports header remove in WORD granularity"); + rte_errno = EINVAL; + goto free_action; + } + + if (attr->by_offset.size > MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE) { + DR_LOG(ERR, "Header removal size limited to %u bytes", + MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE); + rte_errno = EINVAL; + goto free_action; + } + + 
action->remove_header.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET; + action->remove_header.start_anchor = attr->by_offset.start_anchor; + action->remove_header.num_of_words = attr->by_offset.size / W_SIZE; + break; + default: + DR_LOG(ERR, "Unsupported remove header type %u", attr->type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (mlx5dr_action_create_stcs(action, NULL)) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2043,6 +2118,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) case MLX5DR_ACTION_TYP_ASO_METER: case MLX5DR_ACTION_TYP_ASO_CT: case MLX5DR_ACTION_TYP_PUSH_VLAN: + case MLX5DR_ACTION_TYP_REMOVE_HEADER: mlx5dr_action_destroy_stcs(action); break; case MLX5DR_ACTION_TYP_DEST_ROOT: @@ -2620,6 +2696,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_REMOVE_HEADER: case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2: /* Single remove header to header */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 593a7f3817..33a674906e 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -159,6 +159,13 @@ struct mlx5dr_action { size_t num_dest; struct mlx5dr_cmd_set_fte_dest *dest_list; } dest_array; + struct { + uint8_t type; + uint8_t start_anchor; + uint8_t end_anchor; + uint8_t num_of_words; + bool decap; + } remove_header; }; }; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index a04dfbb97a..6607daaa63 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -25,6 +25,7 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_DEST_ROOT] = "DEST_ROOT", [MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY", [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", + [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
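[Editor's note] A sketch of the two removal modes the new attribute union exposes (context creation omitted; the anchor values are one plausible combination, not mandated by the patch):

    struct mlx5dr_action_remove_header_attr attr = {0};
    struct mlx5dr_action *del_hdr, *del_words;

    /* By header: strip everything between two PRM anchors (dynamic decap). */
    attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER;
    attr.by_anchor.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
    attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC;
    attr.by_anchor.decap = true;
    del_hdr = mlx5dr_action_create_remove_header(ctx, &attr,
                                                 MLX5DR_ACTION_FLAG_HWS_FDB);

    /* By offset: strip a fixed size from a start anchor. The size must be
     * WORD (2B) aligned and at most 128 bytes, per the checks above. */
    memset(&attr, 0, sizeof(attr));
    attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET;
    attr.by_offset.start_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4;
    attr.by_offset.size = 8;
    del_words = mlx5dr_action_create_remove_header(ctx, &attr,
                                                   MLX5DR_ACTION_FLAG_HWS_FDB);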
* [PATCH v4 03/13] net/mlx5/hws: allow jump to TIR over FDB 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 01/13] net/mlx5/hws: support insert header action Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 02/13] net/mlx5/hws: support remove " Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 04/13] net/mlx5/hws: support dynamic re-parse Rongwei Liu ` (10 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Alex Vesker, Erez Shitrit From: Alex Vesker <valex@nvidia.com> Currently the TIR action is allowed only for NIC RX. This change allows the TIR action over FDB for RX traffic; in the TX direction packets will be dropped. Signed-off-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 2 ++ drivers/net/mlx5/hws/mlx5dr_action.c | 17 ++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 4 ++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 1 + 4 files changed, 23 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 2b499666f8..5259031a04 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -2418,6 +2418,8 @@ struct mlx5_ifc_wqe_based_flow_table_cap_bits { u8 reserved_at_180[0x10]; u8 ste_format_gen_wqe[0x10]; u8 linear_match_definer_reg_c3[0x20]; + u8 fdb_jump_to_tir_stc[0x1]; + u8 reserved_at_1c1[0x1f]; }; union mlx5_ifc_hca_cap_union_bits { diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index f794d6cd78..1bace23c58 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -389,7 +389,15 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx, } use_fixup = true; break; - + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + /* TIR is allowed on RX side, requires mask in case of FDB */ + if (fw_tbl_type == FS_FT_FDB_TX) { + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + use_fixup = true; + } + break; default: break; } @@ -859,6 +867,13 @@ mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, return NULL; } + if ((flags & MLX5DR_ACTION_FLAG_ROOT_FDB) || + (flags & MLX5DR_ACTION_FLAG_HWS_FDB && !ctx->caps->fdb_tir_stc)) { + DR_LOG(ERR, "TIR action not support on FDB"); + rte_errno = ENOTSUP; + return NULL; + } + if (!is_local) { DR_LOG(ERR, "TIR should be created on local ibv_device, flags: 0x%x", flags); diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index f24651041c..a07378bc42 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -1259,6 +1259,10 @@ int mlx5dr_cmd_query_caps(struct ibv_context *ctx, caps->supp_ste_format_gen_wqe = MLX5_GET(query_hca_cap_out, out, capability.wqe_based_flow_table_cap. ste_format_gen_wqe); + + caps->fdb_tir_stc = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ fdb_jump_to_tir_stc); } if (caps->eswitch_manager) { diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 03db62e2e2..2b44f0e1f2 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -236,6 +236,7 @@ struct mlx5dr_cmd_query_caps { uint8_t log_header_modify_argument_granularity; uint8_t log_header_modify_argument_max_alloc; uint8_t sq_ts_format; + uint8_t fdb_tir_stc; uint64_t definer_format_sup; uint32_t trivial_match_definer; uint32_t vhca_id; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
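[Editor's note] The caller-visible behavior this patch introduces, condensed as a restatement of the two hunks above rather than new driver code:

    /* Creation: FDB TIR is capability gated. */
    if ((flags & MLX5DR_ACTION_FLAG_ROOT_FDB) ||
        (flags & MLX5DR_ACTION_FLAG_HWS_FDB && !ctx->caps->fdb_tir_stc)) {
        rte_errno = ENOTSUP; /* root FDB, or FW lacks fdb_jump_to_tir_stc */
        return NULL;
    }

    /* Rule insertion: on the FDB TX side (fw_tbl_type == FS_FT_FDB_TX) the
     * STC is fixed up from JUMP_TO_TIR to DROP, so only RX traffic ever
     * reaches the TIR; TX traffic hitting the same rule is dropped. */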
* [PATCH v4 04/13] net/mlx5/hws: support dynamic re-parse 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (2 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 03/13] net/mlx5/hws: allow jump to TIR over FDB Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 05/13] net/mlx5/hws: dynamic re-parse for modify header Rongwei Liu ` (9 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Alex Vesker, Erez Shitrit From: Alex Vesker <valex@nvidia.com> Each steering entry (STE) has a bit called re-parse, used for re-parsing the packet in HW. Re-parsing is needed after a reformat (e.g. push/pop/encapsulate/...) or when modifying packet headers in a way that changes the packet structure (e.g. TCP to UDP). Until now we re-parsed the packet in each STE, leading to longer processing per packet. On supporting devices we can control the re-parse bit to allow better performance. Signed-off-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 10 ++++- drivers/net/mlx5/hws/mlx5dr_action.c | 57 +++++++++++++++++---------- drivers/net/mlx5/hws/mlx5dr_action.h | 2 +- drivers/net/mlx5/hws/mlx5dr_cmd.c | 3 +- drivers/net/mlx5/hws/mlx5dr_cmd.h | 2 + drivers/net/mlx5/hws/mlx5dr_context.c | 15 +++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 9 ++++- drivers/net/mlx5/hws/mlx5dr_matcher.c | 2 + 8 files changed, 74 insertions(+), 26 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 5259031a04..15dbf1a0cb 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3445,6 +3445,7 @@ enum mlx5_ifc_rtc_ste_format { enum mlx5_ifc_rtc_reparse_mode { MLX5_IFC_RTC_REPARSE_NEVER = 0x0, MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, + MLX5_IFC_RTC_REPARSE_BY_STC = 0x2, }; #define MLX5_IFC_RTC_LINEAR_LOOKUP_TBL_LOG_MAX 16 @@ -3512,6 +3513,12 @@ enum mlx5_ifc_stc_action_type { MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, }; +enum mlx5_ifc_stc_reparse_mode { + MLX5_IFC_STC_REPARSE_IGNORE = 0x0, + MLX5_IFC_STC_REPARSE_NEVER = 0x1, + MLX5_IFC_STC_REPARSE_ALWAYS = 0x2, +}; + struct mlx5_ifc_stc_ste_param_ste_table_bits { u8 ste_obj_id[0x20]; u8 match_definer_id[0x20]; @@ -3623,7 +3630,8 @@ enum { struct mlx5_ifc_stc_bits { u8 modify_field_select[0x40]; - u8 reserved_at_40[0x48]; + u8 reserved_at_40[0x46]; + u8 reparse_mode[0x2]; u8 table_type[0x8]; u8 ste_action_offset[0x8]; u8 action_type[0x8]; diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 1bace23c58..85987fe2ea 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -107,16 +107,18 @@ static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, goto unlock_and_out; } switch (stc_type) { - case MLX5DR_CONTEXT_SHARED_STC_DECAP: + case MLX5DR_CONTEXT_SHARED_STC_DECAP_L3: stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; stc_attr.remove_header.decap = 0; stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; break; - case MLX5DR_CONTEXT_SHARED_STC_POP: + case MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP: stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; 
stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; break; @@ -424,6 +426,11 @@ int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, } stc_attr->stc_offset = stc->offset; + + /* Dynamic reparse not supported, overwrite and use default */ + if (!mlx5dr_context_cap_dynamic_reparse(ctx)) + stc_attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); /* According to table/action limitation change the stc_attr */ @@ -512,6 +519,8 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, struct mlx5dr_devx_obj *obj, struct mlx5dr_cmd_stc_modify_attr *attr) { + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + switch (action->type) { case MLX5DR_ACTION_TYP_TAG: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; @@ -538,6 +547,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; if (action->modify_header.num_of_actions == 1) { attr->modify_action.data = action->modify_header.single_action; attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); @@ -565,6 +575,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->remove_header.decap = 1; attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; @@ -574,6 +585,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_INSERT_HEADER: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->insert_header.encap = action->reformat.encap; attr->insert_header.insert_anchor = action->reformat.anchor; attr->insert_header.arg_id = action->reformat.arg_obj->id; @@ -603,12 +615,14 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_POP_VLAN: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; break; case MLX5DR_ACTION_TYP_PUSH_VLAN: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->insert_header.encap = 0; attr->insert_header.is_inline = 1; attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; @@ -627,6 +641,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->remove_words.num_of_words = action->remove_header.num_of_words; } attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; break; default: DR_LOG(ERR, "Invalid action type %d", action->type); @@ -1171,7 +1186,7 @@ mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t 
flags) if (!action) return NULL; - ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP); if (ret) { DR_LOG(ERR, "Failed to create remove stc for reformat"); goto free_action; @@ -1186,7 +1201,7 @@ mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) return action; free_shared: - mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP); free_action: simple_free(action); return NULL; @@ -1342,7 +1357,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, int ret; /* The action is remove-l2-header + insert-l3-header */ - ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3); if (ret) { DR_LOG(ERR, "Failed to create remove stc for reformat"); return ret; @@ -1359,7 +1374,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, return 0; put_shared_stc: - mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3); return ret; } @@ -2142,7 +2157,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) break; case MLX5DR_ACTION_TYP_POP_VLAN: mlx5dr_action_destroy_stcs(action); - mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP); break; case MLX5DR_ACTION_TYP_DEST_ARRAY: mlx5dr_action_destroy_stcs(action); @@ -2170,7 +2185,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_cmd_destroy_obj(obj); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: - mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3); for (i = 0; i < action->reformat.num_of_hdrs; i++) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); @@ -2230,6 +2245,7 @@ int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, &default_stc->nop_ctr); if (ret) { @@ -2594,7 +2610,7 @@ mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, - MLX5DR_CONTEXT_SHARED_STC_POP)); + MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP)); } static void @@ -2629,7 +2645,7 @@ mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, - MLX5DR_CONTEXT_SHARED_STC_DECAP)); + MLX5DR_CONTEXT_SHARED_STC_DECAP_L3)); } int mlx5dr_action_template_process(struct mlx5dr_action_template *at) @@ -2680,8 +2696,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; break; } - setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); - setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter = 
mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY | ASF_INSERT); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; setter->set_single = &mlx5dr_action_setter_single; setter->idx_single = i; pop_setter = setter; @@ -2690,7 +2706,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_PUSH_VLAN: /* Double insert inline */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); - setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->flags |= ASF_DOUBLE | ASF_INSERT; setter->set_double = &mlx5dr_action_setter_push_vlan; setter->idx_double = i; break; @@ -2698,7 +2714,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); - setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->flags |= ASF_DOUBLE | ASF_MODIFY; setter->set_double = &mlx5dr_action_setter_modify_header; setter->idx_double = i; break; @@ -2715,7 +2731,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2: /* Single remove header to header */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); - setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; setter->set_single = &mlx5dr_action_setter_single; setter->idx_single = i; break; @@ -2723,8 +2739,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_INSERT_HEADER: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: /* Double insert header with pointer */ - setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); - setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; setter->set_double = &mlx5dr_action_setter_insert_ptr; setter->idx_double = i; break; @@ -2732,7 +2748,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: /* Single remove + Double insert header with pointer */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); - setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE; setter->set_double = &mlx5dr_action_setter_insert_ptr; setter->idx_double = i; setter->set_single = &mlx5dr_action_setter_common_decap; @@ -2741,9 +2757,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: /* Double modify header list with remove and push inline */ - setter = mlx5dr_action_setter_find_first(last_setter, - ASF_DOUBLE | ASF_REMOVE); - setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_INSERT; setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; setter->idx_double = i; break; diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 33a674906e..4bd3d3b26b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -52,7 +52,7 @@ enum mlx5dr_action_setter_flag { ASF_SINGLE2 = 1 << 1, ASF_SINGLE3 = 1 << 2, ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, - ASF_REPARSE = 1 << 3, + ASF_INSERT = 1 << 3, 
ASF_REMOVE = 1 << 4, ASF_MODIFY = 1 << 5, ASF_CTR = 1 << 6, diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index a07378bc42..876a47147d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -394,7 +394,7 @@ mlx5dr_cmd_rtc_create(struct ibv_context *ctx, MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); - MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + MLX5_SET(rtc, attr, reparse_mode, rtc_attr->reparse_mode); devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); if (!devx_obj->obj) { @@ -570,6 +570,7 @@ mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, attr = MLX5_ADDR_OF(create_stc_in, in, stc); MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET(stc, attr, reparse_mode, stc_attr->reparse_mode); MLX5_SET64(stc, attr, modify_field_select, MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 2b44f0e1f2..18c2b07fc8 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -79,6 +79,7 @@ struct mlx5dr_cmd_rtc_create_attr { uint8_t table_type; uint8_t match_definer_0; uint8_t match_definer_1; + uint8_t reparse_mode; bool is_frst_jumbo; bool is_scnd_range; }; @@ -98,6 +99,7 @@ struct mlx5dr_cmd_stc_create_attr { struct mlx5dr_cmd_stc_modify_attr { uint32_t stc_offset; uint8_t action_offset; + uint8_t reparse_mode; enum mlx5_ifc_stc_action_type action_type; union { uint32_t id; /* TIRN, TAG, FT ID, STE ID */ diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c index 08a5ee92a5..15d53c578a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_context.c +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -4,6 +4,21 @@ #include "mlx5dr_internal.h" +bool mlx5dr_context_cap_dynamic_reparse(struct mlx5dr_context *ctx) +{ + return IS_BIT_SET(ctx->caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_BY_STC); +} + +uint8_t mlx5dr_context_get_reparse_mode(struct mlx5dr_context *ctx) +{ + /* Prefer to use dynamic reparse, reparse only specific actions */ + if (mlx5dr_context_cap_dynamic_reparse(ctx)) + return MLX5_IFC_RTC_REPARSE_NEVER; + + /* Otherwise use less efficient static */ + return MLX5_IFC_RTC_REPARSE_ALWAYS; +} + static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) { struct mlx5dr_pool_attr pool_attr = {0}; diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h index 0ba8d0c92e..f476c2308c 100644 --- a/drivers/net/mlx5/hws/mlx5dr_context.h +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -11,8 +11,8 @@ enum mlx5dr_context_flags { }; enum mlx5dr_context_shared_stc_type { - MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, - MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_DECAP_L3 = 0, + MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP = 1, MLX5DR_CONTEXT_SHARED_STC_MAX = 2, }; @@ -60,4 +60,9 @@ mlx5dr_context_get_local_ibv(struct mlx5dr_context *ctx) return ctx->ibv_ctx; } + +bool mlx5dr_context_cap_dynamic_reparse(struct mlx5dr_context *ctx); + +uint8_t mlx5dr_context_get_reparse_mode(struct mlx5dr_context *ctx); + #endif /* MLX5DR_CONTEXT_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index ebe42c44c6..35701a4e2c 100644 --- 
a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -562,6 +562,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = devx_obj->id; rtc_attr.ste_offset = ste->offset; + rtc_attr.reparse_mode = mlx5dr_context_get_reparse_mode(ctx); rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, rtc_type, false); @@ -764,6 +765,7 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) /* Allocate STC for jumps to STE */ stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_NEVER; stc_attr.ste_table.ste = matcher->action_ste.ste; stc_attr.ste_table.ste_pool = matcher->action_ste.pool; stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
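[Editor's note] Condensed, the re-parse model after this patch is two-level. The snippet below restates it; action_changes_packet_structure is a hypothetical predicate standing in for the per-action-type switch in mlx5dr_action_fill_stc_attr():

    struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0};
    struct mlx5dr_cmd_stc_modify_attr stc_attr = {0};

    /* RTC level: one mode per matcher RTC. */
    rtc_attr.reparse_mode = mlx5dr_context_get_reparse_mode(ctx);
    /* MLX5_IFC_RTC_REPARSE_NEVER when the device reports REPARSE_BY_STC,
     * MLX5_IFC_RTC_REPARSE_ALWAYS as the legacy fallback. */

    /* STC level: only structure-changing actions pay the re-parse cost. */
    if (action_changes_packet_structure)   /* insert/remove/encap/pop/push */
        stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
    else                                   /* tag, counter, jump, ... */
        stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;

    /* Without REPARSE_BY_STC support, mlx5dr_action_alloc_single_stc()
     * overwrites the STC mode with IGNORE and the RTC re-parses always. */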
* [PATCH v4 05/13] net/mlx5/hws: dynamic re-parse for modify header 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (3 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 04/13] net/mlx5/hws: support dynamic re-parse Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 06/13] net/mlx5/hws: fix incorrect re-parse on complex rules Rongwei Liu ` (8 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Alex Vesker, Erez Shitrit From: Alex Vesker <valex@nvidia.com> With dynamic re-parse, modify header would always request a re-parse, but this is not always necessary. Re-parse is only needed when the packet structure is changed. This support allows deciding dynamically, based on the action pattern, whether a re-parse is required. Signed-off-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 15 +++++++--- drivers/net/mlx5/hws/mlx5dr_action.h | 1 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 41 +++++++++++++++++++++++++-- drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 2 ++ 4 files changed, 53 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 85987fe2ea..98bb556b7b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -547,7 +547,9 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; + if (action->modify_header.require_reparse) + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; + if (action->modify_header.num_of_actions == 1) { attr->modify_action.data = action->modify_header.single_action; attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); @@ -1474,6 +1476,8 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action, action[i].modify_header.num_of_actions = num_of_actions; action[i].modify_header.arg_obj = arg_obj; action[i].modify_header.pat_obj = pat_obj; + action[i].modify_header.require_reparse = + mlx5dr_pat_require_reparse((__be64 *)mh_data, num_of_actions); ret = mlx5dr_action_create_stcs(&action[i], NULL); if (ret) { @@ -1620,7 +1624,7 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, { struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL; struct mlx5dr_context *ctx = action->ctx; - uint16_t max_mh_actions = 0; + uint16_t num_actions, max_mh_actions = 0; int i, ret; /* Calculate maximum number of mh actions for shared arg allocation */ @@ -1646,11 +1650,14 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, goto free_stc_and_pat; } + num_actions = pattern[i].sz / MLX5DR_MODIFY_ACTION_SIZE; action[i].modify_header.num_of_patterns = num_of_patterns; action[i].modify_header.max_num_of_actions = max_mh_actions; - action[i].modify_header.num_of_actions = pattern[i].sz / MLX5DR_MODIFY_ACTION_SIZE; + action[i].modify_header.num_of_actions = num_actions; + action[i].modify_header.require_reparse = + mlx5dr_pat_require_reparse(pattern[i].data, num_actions); - if (action[i].modify_header.num_of_actions == 1) { + if (num_actions == 1) { pat_obj = NULL; /* Optimize single modify action to be used inline */ action[i].modify_header.single_action = 
pattern[i].data[0]; diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 4bd3d3b26b..7e5063b57e 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -127,6 +127,7 @@ struct mlx5dr_action { uint8_t single_action_type; uint8_t num_of_actions; uint8_t max_num_of_actions; + uint8_t require_reparse; } modify_header; struct { struct mlx5dr_devx_obj *arg_obj; diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c index 349d77f296..a949844d24 100644 --- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -37,6 +37,43 @@ uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); } +bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions) +{ + uint16_t i, field; + uint8_t action_id; + + for (i = 0; i < num_of_actions; i++) { + action_id = MLX5_GET(set_action_in, &actions[i], action_type); + + switch (action_id) { + case MLX5_MODIFICATION_TYPE_NOP: + field = MLX5_MODI_OUT_NONE; + break; + + case MLX5_MODIFICATION_TYPE_SET: + case MLX5_MODIFICATION_TYPE_ADD: + field = MLX5_GET(set_action_in, &actions[i], field); + break; + + case MLX5_MODIFICATION_TYPE_COPY: + case MLX5_MODIFICATION_TYPE_ADD_FIELD: + field = MLX5_GET(copy_action_in, &actions[i], dst_field); + break; + + default: + /* Insert/Remove/Unknown actions require reparse */ + return true; + } + + /* Below fields can change packet structure require a reparse */ + if (field == MLX5_MODI_OUT_ETHERTYPE || + field == MLX5_MODI_OUT_IPV6_NEXT_HDR) + return true; + } + + return false; +} + /* Cache and cache element handling */ int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) { @@ -228,8 +265,8 @@ mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, } pat_obj = mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, - pattern_sz, - (uint8_t *)pattern); + pattern_sz, + (uint8_t *)pattern); if (!pat_obj) { DR_LOG(ERR, "Failed to create pattern FW object"); goto out_unlock; diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h index 2a38891c4d..bbe313102f 100644 --- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -79,6 +79,8 @@ void mlx5dr_pat_put_pattern(struct mlx5dr_context *ctx, bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, uint32_t arg_size); +bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions); + void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, void *comp_data, uint32_t arg_idx, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
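[Editor's note] As an example of what the new check classifies, compare a destination-address rewrite with a next-header rewrite. The action buffers are built the same way the driver builds them; the field choice is illustrative:

    __be64 set_dip[1] = {0}, set_nh[1] = {0};

    /* SET on ipv6.dst_addr bits 31:0: structure unchanged, no re-parse. */
    MLX5_SET(set_action_in, set_dip, action_type, MLX5_MODIFICATION_TYPE_SET);
    MLX5_SET(set_action_in, set_dip, field, MLX5_MODI_OUT_DIPV6_31_0);
    /* mlx5dr_pat_require_reparse(set_dip, 1) returns false */

    /* SET on ipv6.next_hdr: the header chain changes, re-parse required. */
    MLX5_SET(set_action_in, set_nh, action_type, MLX5_MODIFICATION_TYPE_SET);
    MLX5_SET(set_action_in, set_nh, field, MLX5_MODI_OUT_IPV6_NEXT_HDR);
    /* mlx5dr_pat_require_reparse(set_nh, 1) returns true */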
* [PATCH v4 06/13] net/mlx5/hws: fix incorrect re-parse on complex rules 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (4 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 05/13] net/mlx5/hws: dynamic re-parse for modify header Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 07/13] net/mlx5: sample the srv6 last segment Rongwei Liu ` (7 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Cc: Alex Vesker, Erez Shitrit From: Alex Vesker <valex@nvidia.com> The re-parse value when jumping to action STEs was set to NEVER, leading to cases in which two STCs accessed the re-parse bit, causing a HW syndrome (0x3d). The solution is to set the re-parse mode to IGNORE in such a case. Fixes: 2b3d3097a10a ("net/mlx5/hws: support dynamic re-parse") Signed-off-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 35701a4e2c..4ea161eae6 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -765,7 +765,7 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) /* Allocate STC for jumps to STE */ stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; - stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_NEVER; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; stc_attr.ste_table.ste = matcher->action_ste.ste; stc_attr.ste_table.ste_pool = matcher->action_ste.pool; stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v4 07/13] net/mlx5: sample the srv6 last segment 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (5 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 06/13] net/mlx5/hws: fix incorrect re-parse on complex rules Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 08/13] net/mlx5/hws: fix potential wrong rte_errno value Rongwei Liu ` (6 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas When removing the IPv6 routing extension header from the packets, the destination address should be updated to the last one in the segment list. Enlarge the hardware sample scope to cover the last segment. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++----------- drivers/net/mlx5/mlx5.h | 6 ++++++ 2 files changed, 36 insertions(+), 11 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index c275cdfee8..e3e36098c2 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1070,6 +1070,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0, }; + uint32_t i; uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_dev_config *config = &priv->sh->cdev->config; @@ -1103,10 +1104,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) node.next_header_field_size = 0x8; node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP; node.in[0].compare_condition_value = IPPROTO_ROUTING; - node.sample[0].flow_match_sample_en = 1; - /* First come first serve no matter inner or outer. */ - node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; - node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* Final IPv6 address. */ + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + node.sample[i].flow_match_sample_en = 1; + node.sample[i].flow_match_sample_offset_mode = + MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* First come first serve no matter inner or outer. 
*/ + node.sample[i].flow_match_sample_tunnel_mode = + MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; + node.sample[i].flow_match_sample_field_base_offset = + (i + 1) * sizeof(uint32_t); /* in bytes */ + } + node.sample[0].flow_match_sample_field_base_offset = 0; node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP; node.out[0].compare_condition_value = IPPROTO_TCP; node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP; @@ -1119,8 +1128,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) goto error; } priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp; - priv->sh->srh_flex_parser.flex.mapnum = 1; - priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1; + priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM; + priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM; ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum, &priv->sh->srh_flex_parser.flex.devx_fp->anchor_id); @@ -1128,12 +1137,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) DRV_LOG(ERR, "Failed to query sample IDs."); goto error; } - ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0], - &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]); - if (ret) { - DRV_LOG(ERR, "Failed to query sample id information."); - goto error; + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i], + &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]); + if (ret) { + DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]); + goto error; + } + } + for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i]; + priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT; + priv->sh->srh_flex_parser.flex.map[i].reg_id = i; + priv->sh->srh_flex_parser.flex.map[i].shift = + (i + 1) * sizeof(uint32_t) * CHAR_BIT; } + priv->sh->srh_flex_parser.flex.map[0].shift = 0; return 0; error: if (fp) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 8f82aff0a5..635dd73674 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1356,6 +1356,7 @@ struct mlx5_flex_pattern_field { uint16_t shift:5; uint16_t reg_id:5; }; + #define MLX5_INVALID_SAMPLE_REG_ID 0x1F /* Port flex item context. */ @@ -1367,6 +1368,11 @@ struct mlx5_flex_item { struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM]; }; +/* + * Sample an IPv6 address and the first dword of SRv6 header. + * Then it is 16 + 4 = 20 bytes which is 5 dwords. + */ +#define MLX5_SRV6_SAMPLE_NUM 5 /* Mlx5 internal flex parser profile structure. */ struct mlx5_internal_flex_parser_profile { uint32_t refcnt; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
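[Editor's note] Spelled out, the sample layout this patch programs (byte offsets are relative to the start of the routing extension header; reading bytes 8..23 as Segment List[0], i.e. the final destination, follows the RFC 8754 SRH layout and is an annotation here, not part of the patch):

    /* MLX5_SRV6_SAMPLE_NUM == 5 dwords:
     *   sample[0] at offset  0 - first SRH dword (next_hdr..segments_left)
     *   sample[1] at offset  8 \
     *   sample[2] at offset 12  } one 16B IPv6 address from the
     *   sample[3] at offset 16  } segment list
     *   sample[4] at offset 20 /
     * The loop programs (i + 1) * sizeof(uint32_t) and the code then
     * forces sample[0] back to offset 0, which is why offset 4 (the
     * second SRH dword) is intentionally skipped.
     */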
* [PATCH v4 08/13] net/mlx5/hws: fix potential wrong rte_errno value 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (6 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 07/13] net/mlx5: sample the srv6 last segment Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 09/13] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu ` (5 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: hamdani, Alex Vesker A valid rte_errno is desired when a DR layer API returns an error, and it must not overwrite the value set by the lower layer. Fixes: 0a2657c4ff4d ("net/mlx5/hws: support insert header action") Cc: hamdani@nvidia.com Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 98bb556b7b..e66a8135dc 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2015,6 +2015,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, if (!num_of_hdrs) { DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero"); + rte_errno = EINVAL; return NULL; } @@ -2062,7 +2063,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, reformat_hdrs, log_bulk_size); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); - rte_errno = EINVAL; goto free_reformat_hdrs; } -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
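[Editor's note] The convention being enforced, from the caller's side (hdr, log_sz and flags are placeholders):

    struct mlx5dr_action *a;

    a = mlx5dr_action_create_insert_header(ctx, 1, &hdr, log_sz, flags);
    if (!a)
        return -rte_errno; /* now the innermost errno, e.g. ENOMEM from
                            * simple_calloc(), rather than a blanket
                            * EINVAL set by the wrapper */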
* [PATCH v4 09/13] net/mlx5/hws: add IPv6 routing extension push remove actions 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (7 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 08/13] net/mlx5/hws: fix potential wrong rte_errno value Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 10/13] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu ` (4 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker Add two dr_actions to implement IPv6 routing extension push and remove. The new actions are combinations of multiple actions rather than new types: basically two modify-header actions plus one reformat action. The action order is the same as for the encap and decap actions. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 29 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 7 + drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + drivers/net/mlx5/mlx5_flow.h | 44 ++++ 6 files changed, 438 insertions(+), 3 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 15dbf1a0cb..9e22dce6da 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3564,6 +3564,7 @@ enum mlx5_ifc_header_anchors { MLX5_HEADER_ANCHOR_PACKET_START = 0x0, MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_TCP_UDP = 0x09, MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, }; diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index e21dcd72ae..d88f73ab57 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -49,6 +49,8 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_REMOVE_HEADER, MLX5DR_ACTION_TYP_DEST_ROOT, MLX5DR_ACTION_TYP_DEST_ARRAY, + MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, + MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, MLX5DR_ACTION_TYP_MAX, }; @@ -242,6 +244,11 @@ struct mlx5dr_rule_action { uint8_t *data; } reformat; + struct { + uint32_t offset; + uint8_t *header; + } ipv6_ext; + struct { rte_be32_t vlan_hdr; } push_vlan; @@ -767,6 +774,28 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, uint32_t flags); +/* Create action to push or remove IPv6 extension header. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] type + * Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or + * MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT. + * @param[in] hdr + * Header for packet reformat. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags); + /* Destroy direct rule action. 
* * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index e66a8135dc..16b7ee3436 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -22,7 +22,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_TAG), BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -32,6 +33,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -52,6 +54,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -63,7 +66,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ [MLX5DR_TABLE_TYPE_FDB] = { BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) | BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) | - BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) | + BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_CTR), @@ -73,6 +77,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) | + BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) | BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_TBL) | @@ -1570,7 +1575,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) { - DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags); + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags); rte_errno = EINVAL; goto free_action; } @@ -2135,6 +2140,347 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, return NULL; } +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[3] = {0}; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Next_hdr will be copied to ipv6.protocol after pop done. 
+ */ + MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[0], length, 8); + MLX5_SET(copy_action_in, &cmd[0], src_offset, 24); + MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id); + + /* Add nop between the continuous same modify field id */ + MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP); + + /* Clear next_hdr for right checksum */ + MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[2], length, 8); + MLX5_SET(set_action_in, &cmd[2], offset, 24); + MLX5_SET(set_action_in, &cmd[2], field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]); + MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Restore next_hdr from seg_left for flex parser identifying */ + MLX5_SET(copy_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[4], length, 8); + MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24); + MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */ + MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, cmd, length, 8); + MLX5_SET(copy_action_in, cmd, src_offset, 24); + MLX5_SET(copy_action_in, cmd, src_field, mod_id); + MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static int +mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action) +{ + uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx); + struct mlx5dr_action_remove_header_attr hdr_attr; + uint32_t i; + + if (!anchor_id) { + rte_errno = EINVAL; + return rte_errno; + } + + action->ipv6_route_ext.action[0] = + 
mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action); + + hdr_attr.by_anchor.decap = 1; + hdr_attr.by_anchor.start_anchor = anchor_id; + hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + hdr_attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->ipv6_route_ext.action[3] = + mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags); + + if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + + /* Set ipv6.protocol to IPPROTO_ROUTING */ + MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, cmd, length, 8); + MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0, + action->flags | MLX5DR_ACTION_FLAG_SHARED); +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, + uint32_t bulk_size, + uint8_t *data) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + uint8_t seg_left, next_hdr; + uint32_t *ipv6_dst_addr; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + } + + /* Copy IPv6 destination address from ipv6_route_ext.last_segment */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[i], field, field[i]); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++)); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */ + MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[4], length, 8); + MLX5_SET(set_action_in, &cmd[4], offset, 24); + MLX5_SET(set_action_in, &cmd[4], field, mod_id); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr); + MLX5_SET(set_action_in, &cmd[4], data, next_hdr); + } + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return 
mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + bulk_size, action->flags); +} + +static int +mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, + struct mlx5dr_action_reformat_header *hdr, + uint32_t bulk_size) +{ + struct mlx5dr_action_insert_header insert_hdr = { {0} }; + uint8_t header[MLX5_PUSH_MAX_LEN]; + uint32_t i; + + if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN || + ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) { + DR_LOG(ERR, "Invalid ipv6_route_ext header"); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + memcpy(header, hdr->data, hdr->sz); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + } + + insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + insert_hdr.encap = 1; + insert_hdr.hdr.sz = hdr->sz; + insert_hdr.hdr.data = header; + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, + bulk_size, action->flags); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data); + + if (!action->ipv6_route_ext.action[0] || + !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 extension actions is not supported"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) { + rte_errno = ENOMEM; + return NULL; + } + + switch (action_type) { + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) { + DR_LOG(ERR, "Pop ipv6_route_ext must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_pop_ipv6_route_ext(action); + break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); + break; + default: + DR_LOG(ERR, "Unsupported action type %d\n", action_type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (ret) { + DR_LOG(ERR, "Failed to create IPv6 extension reformat action"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2203,6 +2549,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + for (i = 
0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 7e5063b57e..5f6eadb76f 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -8,6 +8,9 @@ /* Max number of STEs needed for a rule (including match) */ #define MLX5DR_ACTION_MAX_STE 10 +/* Max number of internal subactions of ipv6_ext */ +#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 + enum mlx5dr_action_stc_idx { MLX5DR_ACTION_STC_IDX_CTRL = 0, MLX5DR_ACTION_STC_IDX_HIT = 1, @@ -138,6 +141,10 @@ struct mlx5dr_action { uint8_t offset; bool encap; } reformat; + struct { + struct mlx5dr_action + *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA]; + } ipv6_route_ext; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 6607daaa63..11557bcab8 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -26,6 +26,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY", [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", + [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT", + [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index a1cd95801b..32388670dd 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -595,6 +595,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -2898,6 +2899,49 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline uint8_t +flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + } +#else + RTE_SET_USED(dr_ctx); +#endif + return 0; +} + +static __rte_always_inline uint16_t +flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + struct mlx5_flex_parser_devx *fp; + + if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM) + return 0; + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) { + fp = priv->sh->srh_flex_parser.flex.devx_fp; + return fp->sample_info[idx].modify_field_id; + } + } +#else + RTE_SET_USED(dr_ctx); + RTE_SET_USED(idx); +#endif + return 0; +} + void mlx5_indirect_list_handles_release(struct rte_eth_dev *dev); void -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
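For reference, the pop path above hides four hardware subactions (three modify-header stages plus an anchor-based remove-header) behind one action handle, and the destroy_hws hunk walks that internal subaction array on cleanup. A minimal caller-side sketch using only the API added in this series; this is a hedged example that assumes a valid mlx5dr_context obtained elsewhere and an FDB-domain deployment, with error handling trimmed:

	/* Pop must be created as a shared HWS action; no header template is
	 * needed because removal is driven by the SRv6 anchor, so hdr stays
	 * empty. Needs mlx5dr.h. */
	static int
	create_srv6_pop(struct mlx5dr_context *ctx, struct mlx5dr_action **pop)
	{
		struct mlx5dr_action_reformat_header hdr = { .sz = 0, .data = NULL };

		*pop = mlx5dr_action_create_reformat_ipv6_ext(ctx,
				MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
				&hdr, 0, /* log_bulk_size must be 0 when shared */
				MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED);
		return *pop ? 0 : -rte_errno; /* EINVAL, ENOTSUP or ENOMEM above */
	}

The matching cleanup is a single mlx5dr_action_destroy() on the returned handle.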
* [PATCH v4 10/13] net/mlx5/hws: add setter for IPv6 routing push remove 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (8 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 09/13] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 11/13] net/mlx5: implement " Rongwei Liu ` (3 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker The rte action is translated to multiple dr_actions, which need different setters to program them. To leverage the existing setter logic, a new callback, fetch_opt, is introduced and called with a unique parameter. Each setter may have different reparse properties: a setter which requires no reparse cannot share the same STE with one that has reparse enabled, even if there is spare space. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 3 +- 2 files changed, 176 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 16b7ee3436..719d546424 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -3007,6 +3007,121 @@ mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3)); } +static void +mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data) +{ + uint8_t *action_ptr = mh_data; + uint32_t *ipv6_dst_addr; + uint8_t seg_left; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list which is the next hop */ + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + + /* Load next hop IPv6 address in reverse order to ipv6.dst_address */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, action_ptr, data, be32toh(*ipv6_dst_addr++)); + action_ptr += MLX5DR_MODIFY_ACTION_SIZE; + } + + /* Set ipv6_route_ext.next_hdr per user input */ + MLX5_SET(set_action_in, action_ptr, data, *data); +} + +static void +mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0}; + struct mlx5dr_action *ipv6_ext_action; + uint8_t *header; + + header = rule_action[setter->idx_double].ipv6_ext.header; + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.modify_header.offset = 0; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.data = NULL; + } else { + /* + * Copy ipv6_dst from ipv6_route_ext.last_seg. + * Set ipv6_route_ext.next_hdr.
+ */ + mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd); + tmp_rule_action.modify_header.data = (uint8_t *)cmd; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_modify_header(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + struct mlx5dr_action *ipv6_ext_action; + uint8_t header[MLX5_PUSH_MAX_LEN]; + + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.reformat.offset = 0; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.data = NULL; + } else { + memcpy(header, rule_action[setter->idx_double].ipv6_ext.header, + tmp_rule_action.action->reformat.header_size); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + tmp_rule_action.reformat.data = header; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_insert_ptr(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = &apply->rule_action[setter->idx_single]; + uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1; + struct mlx5dr_action *action; + + /* Pop the ipv6_route_ext as set_single logic */ + action = rule_action->action->ipv6_route_ext.action[idx]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(action->stc[apply->tbl_type].offset); +} + int mlx5dr_action_template_process(struct mlx5dr_action_template *at) { struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; @@ -3070,6 +3185,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Set ipv6_route_ext.next_hdr to 0 for checksum bug. + */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* + * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left. + * Load the final destination address from flex parser sample 1->4. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + /* Pop ipv6_route_ext */ + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* + * Load the right ipv6_route_ext.next_hdr per user input buffer. + * Load the next dest_addr from the ipv6_route_ext.seg_list[last]. + */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 5f6eadb76f..c12d4308c7 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -6,7 +6,7 @@ #define MLX5DR_ACTION_H_ /* Max number of STEs needed for a rule (including match) */ -#define MLX5DR_ACTION_MAX_STE 10 +#define MLX5DR_ACTION_MAX_STE 20 /* Max number of internal subactions of ipv6_ext */ #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 @@ -104,6 +104,7 @@ struct mlx5dr_actions_wqe_setter { uint8_t idx_ctr; uint8_t idx_hit; uint8_t flags; + uint8_t extra_data; }; struct mlx5dr_action_template { -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
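The setters above parameterize one shared callback through extra_data, so each STE programs a different internal subaction. The only non-obvious arithmetic is locating the next hop inside the caller-supplied routing header; below is a standalone sketch of that lookup, assuming the RFC 8754 layout the driver relies on (8-byte fixed header followed by 16-byte segments). It is an illustrative helper, not driver code:

	#include <stdint.h>
	#include <stddef.h>

	/* Same math as mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr():
	 * the next hop is segment[segments_left - 1] of the segment list. */
	static const uint8_t *
	srh_next_hop(const uint8_t *srh)
	{
		uint8_t seg_left = srh[3]; /* segments_left is byte 3 of the SRH */

		return srh + 8 + (size_t)(seg_left - 1) * 16;
	}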
* [PATCH v4 11/13] net/mlx5: implement IPv6 routing push remove 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (9 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 10/13] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 12/13] net/mlx5/hws: fix srv6 push compilation failure Rongwei Liu ` (2 subsequent siblings) 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas Reserve a push data buffer for each job; the maximum length is set to 128 bytes for now. Only the IPPROTO_ROUTING type is supported when translating the rte flow action. Remove actions must be shared globally and support only TCP or UDP as the next layer. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 11 + doc/guides/rel_notes/release_23_11.rst | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 21 +- drivers/net/mlx5/mlx5_flow_hw.c | 283 ++++++++++++++++++++++++- 6 files changed, 311 insertions(+), 9 deletions(-) diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 0ed9a6aefc..0739fe9d63 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -108,6 +108,8 @@ flag = Y inc_tcp_ack = Y inc_tcp_seq = Y indirect_list = Y +ipv6_ext_push = Y +ipv6_ext_remove = Y jump = Y mark = Y meter = Y diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index a67f1e924f..9fb545af19 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -148,6 +148,7 @@ Features - Matching on GTP extension header with raw encap/decap action. - Matching on Geneve TLV option header with raw encap/decap action. - Matching on ESP header SPI field. +- Matching on flex item with specific pattern. - Matching on InfiniBand BTH. - Modify IPv4/IPv6 ECN field. - RSS support in sample action. @@ -166,6 +167,8 @@ Features - Sub-Function. - Matching on represented port. - Matching on aggregated affinity. +- Modify flex item field. +- Push or remove IPv6 routing extension. Limitations @@ -759,6 +762,14 @@ Limitations to the representor of the source virtual port (SF/VF), while if it is disabled, the traffic will be routed based on the steering rules in the ingress domain. +- IPv6 routing extension push or remove: + + - Supported only with HW Steering enabled (``dv_flow_en`` = 2). + - Supported in non-zero group (no limits on the transfer domain if `fdb_def_rule_en` = 1, which is the default). + - Only supports TCP or UDP as next layer. + - IPv6 routing header must be the only present extension. + - Not supported on guest port. + Statistics ---------- diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index f337db19f0..849ea7e4b8 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -158,6 +158,8 @@ New Features * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item. * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror. * Added support for Multiport E-Switch. + * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action. + * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action.
* **Updated Solarflare net driver.** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 635dd73674..6d77397288 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -394,6 +394,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 32388670dd..f5357ca667 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -363,6 +363,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44) #define MLX5_FLOW_ACTION_QUOTA (1ull << 46) #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -1269,6 +1271,8 @@ typedef int const struct rte_flow_action *, struct mlx5dr_rule_action *); +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 extension push data len. */ + uint16_t len; + } ipv6_ext; struct { uint32_t id; uint32_t conf_masked:1; @@ -1359,6 +1367,7 @@ struct rte_flow_actions_template { uint16_t *src_off; /* RTE action displacement from app. template */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */ uint32_t refcnt; /* Reference counter. */ uint8_t flex_item; /* flex item index. */ }; @@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push remove action struct. */ +struct mlx5_hw_push_remove_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_remove action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1415,6 +1431,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/remove action. */ + struct mlx5_hw_push_remove_action *push_remove; + uint16_t push_remove_pos; /* Push/remove action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 9bffc79291..7376030da2 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_remove) { + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } if (acts->mhdr) { flow_hw_template_destroy_mhdr_action(acts->mhdr); mlx5_free(acts->mhdr); @@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. + */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len) +{ + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->ipv6_ext.len = len; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, return 0; } + +static int +mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct rte_flow_actions_template *at, + uint8_t *push_data, uint8_t *push_data_m, + size_t push_size, uint16_t recom_src, + enum mlx5dr_action_type recom_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + const struct rte_flow_attr *attr = &table_attr->flow_attr; + enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); + struct mlx5_action_construct_data *act_data; + struct mlx5dr_action_reformat_header hdr = {0}; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_remove) + push_size, + 0, SOCKET_ID_ANY); + if (!acts->push_remove) + return -ENOMEM; + + switch (recom_type) { + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!push_data || !push_size) + goto err1; + if (!push_data_m) { + bulk = rte_log2_u32(table_attr->nb_flows); + } else { + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + } + acts->push_remove->data_size = push_size; + memcpy(acts->push_remove->data, push_data, push_size); + hdr.data = push_data; + hdr.sz = push_size; + break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + break; + default: + break; + } + + acts->push_remove->action = + mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx, + recom_type, &hdr, bulk, flag); + if (!acts->push_remove->action) + 
goto err1; + acts->rule_acts[at->recom_off].action = acts->push_remove->action; + acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data; + acts->rule_acts[at->recom_off].ipv6_ext.offset = 0; + acts->push_remove_pos = at->recom_off; + if (!acts->push_remove->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size); + if (!act_data) + goto err; + } + return 0; +err: + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); +err1: + if (acts->push_remove) { + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } + return -EINVAL; +} + /** * Translate rte_flow actions to DR action. * @@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *masks = at->masks; enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t jump_pos; uint32_t ct_idx; @@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + if (ipv6_ext_data) { + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + } + recom_src = src_pos; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: flow_hw_translate_group(dev, cfg, attr->group, &target_grp, error); @@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, if (ret) goto err; } + if (recom_used) { + MLX5_ASSERT(at->recom_off != UINT16_MAX); + ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data, + push_data_m, push_size, recom_src, + recom_type); + if (ret) + goto err; + } return 0; err: err = rte_errno; @@ -2719,11 +2882,13 @@ 
flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, + act_data->ipv6_ext.len); + MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_remove && !hw_acts->push_remove->shared) { + rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = + job->flow->res_idx - 1; + rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, #endif uint16_t i; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. */ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. 
*/ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, [RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, }; static inline void @@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, * NULL otherwise. */ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for remove anchor. 
*/ + if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT || + recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -6155,6 +6412,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, at->dr_off[i] = UINT16_MAX; at->reformat_off = UINT16_MAX; at->mhdr_off = UINT16_MAX; + at->recom_off = UINT16_MAX; for (i = 0; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++, masks++, i++) { const struct rte_flow_action_modify_field *info; @@ -6183,7 +6441,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, break; } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -6220,6 +6478,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, struct rte_flow_actions_template *template, struct rte_flow_error *error __rte_unused) { + uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE | + MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) { DRV_LOG(WARNING, "Action template %p is still in use.", (void *)template); @@ -6228,6 +6489,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->action_flags & flag) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -8796,6 +9059,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -8811,7 +9075,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; struct rte_flow_hw *upd_flow = NULL; @@ -8831,13 +9095,16 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; upd_flow = (struct rte_flow_hw *) &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
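At the rte_flow level, everything the translation code above consumes comes from one action configuration. A hedged usage sketch follows; srh_buf and srh_len are placeholders for a caller-built SRv6 header whose first byte is the real next_hdr, and the template/table plumbing plus the fate action are omitted:

	struct rte_flow_action_ipv6_ext_push push_conf = {
		.data = srh_buf,          /* SRv6 header template to insert */
		.size = srh_len,          /* must not exceed MLX5_PUSH_MAX_LEN (128 B) */
		.type = IPPROTO_ROUTING,  /* the only type the validation accepts */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, .conf = &push_conf },
		/* ... a fate action (queue, jump, ...) would normally follow ... */
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

Per the translation above, supplying the configuration in the template mask makes the action shared (one DR action for the whole table), while an empty mask keeps it per-rule at the cost of one push-buffer copy per enqueued flow, as flow_hw_actions_construct() shows.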
* [PATCH v4 12/13] net/mlx5/hws: fix srv6 push compilation failure 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (10 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 11/13] net/mlx5: implement " Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-01 4:44 ` [PATCH v4 13/13] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu 2023-11-02 13:44 ` [PATCH v4 00/13] support IPv6 push remove action Raslan Darawsheh 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Alex Vesker The OVS team reported a PMD compilation failure with: gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0 In function ‘mlx5dr_action_create_push_ipv6_route_ext_mhdr2’, ../drivers/net/mlx5/hws/mlx5dr_action.c:2701:64: ‘ipv6_dst_addr’ may be used uninitialized Only a shared action needs to query the IPv6 destination address when creating the dr_action, so this is a false positive. Initialize the pointer to NULL to silence the warning. Fixes: a50f6d3fe58d ("net/mlx5/hws: add IPv6 routing extension push remove actions") Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 719d546424..43a65bdfd1 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2326,8 +2326,8 @@ mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, MLX5_MODI_OUT_DIPV6_31_0 }; struct mlx5dr_action_mh_pattern pattern; + uint32_t *ipv6_dst_addr = NULL; uint8_t seg_left, next_hdr; - uint32_t *ipv6_dst_addr; __be64 cmd[5] = {0}; uint16_t mod_id; uint32_t i; -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
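The diagnostic itself is the classic guarded-initialization pattern that -Wmaybe-uninitialized cannot see through. A condensed sketch of the shape gcc 13 may flag; this is generic C for illustration, not the driver code:

	/* p is written and read only under the same condition, but the
	 * compiler cannot prove the two guards always agree, so it warns. */
	static unsigned int
	pick(const unsigned int *tbl, int shared)
	{
		const unsigned int *p; /* "may be used uninitialized" here */

		if (shared)
			p = &tbl[3];
		return shared ? *p : 0;
	}

Initializing the pointer to NULL, as the one-line fix above does, costs nothing on the non-shared path and silences the false positive.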
* [PATCH v4 13/13] net/mlx5/hws: add stc reparse support for srv6 push pop 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (11 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 12/13] net/mlx5/hws: fix srv6 push compilation failure Rongwei Liu @ 2023-11-01 4:44 ` Rongwei Liu 2023-11-02 13:44 ` [PATCH v4 00/13] support IPv6 push remove action Raslan Darawsheh 13 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-11-01 4:44 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, suanmingm, thomas; +Cc: Erez Shitrit After pushing/popping srv6 into/from IPv6 packets, the checksum needs to stay correct. To achieve this, each STE's reparse behavior must be controllable (CX7 and above). Add two more flag enumeration definitions to allow external control of the reparse property in the STC. 1. Push a. 1st STE, insert header action, reparse ignored (default is reparse always) b. 2nd STE, modify IPv6 protocol, reparse always as default. c. 3rd STE, modify header list, reparse always (default is reparse ignored) 2. Pop a. 1st STE, modify header list, reparse always (default is reparse ignored) b. 2nd STE, modify header list, reparse always (default is reparse ignored) c. 3rd STE, modify IPv6 protocol, reparse ignored (default is reparse always); remove header action, reparse always as default. For CX6Lx and CX6Dx, the reparse behavior is controlled by the RTC as always; only the pop action can work correctly there. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 115 +++++++++++++++++++-------- drivers/net/mlx5/hws/mlx5dr_action.h | 7 ++ 2 files changed, 87 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 43a65bdfd1..862ee3e332 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -552,6 +552,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; if (action->modify_header.require_reparse) attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; @@ -590,9 +591,12 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: case MLX5DR_ACTION_TYP_INSERT_HEADER: + attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; + if (!action->reformat.require_reparse) + attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; - attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS; attr->insert_header.encap = action->reformat.encap; attr->insert_header.insert_anchor = action->reformat.anchor; attr->insert_header.arg_id = action->reformat.arg_obj->id; @@ -1301,7 +1305,7 @@ static int mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, uint8_t num_of_hdrs, struct mlx5dr_action_reformat_header *hdrs, - uint32_t log_bulk_sz) + uint32_t log_bulk_sz, uint32_t reparse) { struct mlx5dr_devx_obj *arg_obj; size_t max_sz = 0; @@ -1338,6 +1342,11 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action, action[i].reformat.encap = 1; } + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].reformat.require_reparse = true; + else if (reparse
== MLX5DR_ACTION_STC_REPARSE_ON) + action[i].reformat.require_reparse = true; + ret = mlx5dr_action_create_stcs(&action[i], NULL); if (ret) { DR_LOG(ERR, "Failed to create stc for reformat"); @@ -1374,7 +1383,8 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action, ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, - log_bulk_sz); + log_bulk_sz, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); if (ret) goto put_shared_stc; @@ -1517,7 +1527,8 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action, ret = mlx5dr_action_create_stcs(action, NULL); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: - ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size); + ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); break; case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size); @@ -1625,7 +1636,8 @@ static int mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, uint8_t num_of_patterns, struct mlx5dr_action_mh_pattern *pattern, - uint32_t log_bulk_size) + uint32_t log_bulk_size, + uint32_t reparse) { struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL; struct mlx5dr_context *ctx = action->ctx; @@ -1659,8 +1671,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, action[i].modify_header.num_of_patterns = num_of_patterns; action[i].modify_header.max_num_of_actions = max_mh_actions; action[i].modify_header.num_of_actions = num_actions; - action[i].modify_header.require_reparse = - mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + + if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT)) + action[i].modify_header.require_reparse = + mlx5dr_pat_require_reparse(pattern[i].data, num_actions); + else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON) + action[i].modify_header.require_reparse = true; if (num_actions == 1) { pat_obj = NULL; @@ -1703,12 +1719,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action, return rte_errno; } -struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - uint8_t num_of_patterns, - struct mlx5dr_action_mh_pattern *patterns, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_modify_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action *action; int ret; @@ -1756,7 +1772,8 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_modify_header_hws(action, num_of_patterns, patterns, - log_bulk_size); + log_bulk_size, + reparse); if (ret) goto free_action; @@ -1767,6 +1784,17 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + uint8_t num_of_patterns, + struct mlx5dr_action_mh_pattern *patterns, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_modify_header_reparse(ctx, num_of_patterns, patterns, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} static struct mlx5dr_devx_obj * mlx5dr_action_dest_array_process_reformat(struct mlx5dr_context *ctx, enum mlx5dr_action_type type, @@ -2007,12 +2035,12 @@ mlx5dr_action_create_dest_root(struct mlx5dr_context *ctx, return NULL; } -struct mlx5dr_action * 
-mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, - uint8_t num_of_hdrs, - struct mlx5dr_action_insert_header *hdrs, - uint32_t log_bulk_size, - uint32_t flags) +static struct mlx5dr_action * +mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags, uint32_t reparse) { struct mlx5dr_action_reformat_header *reformat_hdrs; struct mlx5dr_action *action; @@ -2065,7 +2093,8 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, } ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, - reformat_hdrs, log_bulk_size); + reformat_hdrs, log_bulk_size, + reparse); if (ret) { DR_LOG(ERR, "Failed to create HWS reformat action"); goto free_reformat_hdrs; @@ -2082,6 +2111,18 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, return NULL; } +struct mlx5dr_action * +mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx, + uint8_t num_of_hdrs, + struct mlx5dr_action_insert_header *hdrs, + uint32_t log_bulk_size, + uint32_t flags) +{ + return mlx5dr_action_create_insert_header_reparse(ctx, num_of_hdrs, hdrs, + log_bulk_size, flags, + MLX5DR_ACTION_STC_REPARSE_DEFAULT); +} + struct mlx5dr_action * mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx, struct mlx5dr_action_remove_header_attr *attr, @@ -2175,8 +2216,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2222,8 +2264,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action) pattern.data = cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_ON); } static void * @@ -2249,8 +2292,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) pattern.data = (__be64 *)cmd; pattern.sz = sizeof(cmd); - return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, - 0, action->flags); + return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0, + action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); } static int @@ -2397,8 +2441,9 @@ mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, insert_hdr.hdr.sz = hdr->sz; insert_hdr.hdr.data = header; action->ipv6_route_ext.action[0] = - mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, - bulk_size, action->flags); + mlx5dr_action_create_insert_header_reparse(action->ctx, 1, &insert_hdr, + bulk_size, action->flags, + MLX5DR_ACTION_STC_REPARSE_OFF); action->ipv6_route_ext.action[1] = mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); action->ipv6_route_ext.action[2] = @@ -2431,12 +2476,6 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, struct mlx5dr_action *action; int ret; - if (mlx5dr_context_cap_dynamic_reparse(ctx)) { - DR_LOG(ERR, "IPv6 extension actions is not supported"); - rte_errno = ENOTSUP; - return NULL; - } - if (!mlx5dr_action_is_hws_flags(flags) || ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); @@ -2461,6 
+2500,12 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, ret = mlx5dr_action_create_pop_ipv6_route_ext(action); break; case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 routing extension push actions is not supported"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size); break; default: diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index c12d4308c7..fad35a845b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -62,6 +62,12 @@ enum mlx5dr_action_setter_flag { ASF_HIT = 1 << 7, }; +enum mlx5dr_action_stc_reparse { + MLX5DR_ACTION_STC_REPARSE_DEFAULT, + MLX5DR_ACTION_STC_REPARSE_ON, + MLX5DR_ACTION_STC_REPARSE_OFF, +}; + struct mlx5dr_action_default_stc { struct mlx5dr_pool_chunk nop_ctr; struct mlx5dr_pool_chunk nop_dw5; @@ -141,6 +147,7 @@ struct mlx5dr_action { uint8_t anchor; uint8_t offset; bool encap; + uint8_t require_reparse; } reformat; struct { struct mlx5dr_action -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
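The tri-state added above is easiest to read as a single mapping onto the STC attribute. Below is a compact restatement of the logic that ends up spread across mlx5dr_action_fill_stc_attr() and the new *_reparse() creators; it is a hedged summary that reuses the patch's own enum values, not literal driver code:

	/* DEFAULT keeps the per-action heuristic (pattern scan for modify
	 * header, reparse-always for insert header); ON and OFF override it. */
	static uint8_t
	stc_reparse_mode(uint32_t reparse, bool dflt_requires_reparse)
	{
		if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
			return MLX5_IFC_STC_REPARSE_ALWAYS;
		if (reparse == MLX5DR_ACTION_STC_REPARSE_OFF)
			return MLX5_IFC_STC_REPARSE_IGNORE;
		return dflt_requires_reparse ? MLX5_IFC_STC_REPARSE_ALWAYS :
					       MLX5_IFC_STC_REPARSE_IGNORE;
	}

This is also why pop still works on CX6Lx/CX6Dx, where reparse is governed by the RTC, while push now requires the dynamic-reparse capability checked only in the push branch.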
* RE: [PATCH v4 00/13] support IPv6 push remove action 2023-11-01 4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu ` (12 preceding siblings ...) 2023-11-01 4:44 ` [PATCH v4 13/13] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu @ 2023-11-02 13:44 ` Raslan Darawsheh 13 siblings, 0 replies; 64+ messages in thread From: Raslan Darawsheh @ 2023-11-02 13:44 UTC (permalink / raw) To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, Ori Kam, Suanming Mou, NBU-Contact-Thomas Monjalon (EXTERNAL) Hi, > -----Original Message----- > From: Rongwei Liu <rongweil@nvidia.com> > Sent: Wednesday, November 1, 2023 6:44 AM > To: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou > <suanmingm@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net> > Subject: [PATCH v4 00/13] support IPv6 push remove action > > Support IPv6 extension push/remove actions in MLX5 PMD. > Routing extension is the only supported type. > > v4: add more dependencies. > v3: rebase. > v2: add reparse control and rebase. > > Alex Vesker (4): > net/mlx5/hws: allow jump to TIR over FDB > net/mlx5/hws: support dynamic re-parse > net/mlx5/hws: dynamic re-parse for modify header > net/mlx5/hws: fix incorrect re-parse on complex rules > > Hamdan Igbaria (2): > net/mlx5/hws: support insert header action > net/mlx5/hws: support remove header action > > Rongwei Liu (7): > net/mlx5: sample the srv6 last segment > net/mlx5/hws: fix potential wrong rte_errno value > net/mlx5/hws: add IPv6 routing extension push remove actions > net/mlx5/hws: add setter for IPv6 routing push remove > net/mlx5: implement IPv6 routing push remove > net/mlx5/hws: fix srv6 push compilation failure > net/mlx5/hws: add stc reparse support for srv6 push pop > > doc/guides/nics/features/mlx5.ini | 2 + > doc/guides/nics/mlx5.rst | 11 + > doc/guides/rel_notes/release_23_11.rst | 2 + > drivers/common/mlx5/mlx5_prm.h | 13 +- > drivers/net/mlx5/hws/mlx5dr.h | 105 +++ > drivers/net/mlx5/hws/mlx5dr_action.c | 873 +++++++++++++++++++++++-- > drivers/net/mlx5/hws/mlx5dr_action.h | 32 +- > drivers/net/mlx5/hws/mlx5dr_cmd.c | 11 +- > drivers/net/mlx5/hws/mlx5dr_cmd.h | 3 + > drivers/net/mlx5/hws/mlx5dr_context.c | 15 + > drivers/net/mlx5/hws/mlx5dr_context.h | 9 +- > drivers/net/mlx5/hws/mlx5dr_debug.c | 4 + > drivers/net/mlx5/hws/mlx5dr_internal.h | 1 + > drivers/net/mlx5/hws/mlx5dr_matcher.c | 2 + > drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 41 +- > drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 2 + > drivers/net/mlx5/mlx5.c | 41 +- > drivers/net/mlx5/mlx5.h | 7 + > drivers/net/mlx5/mlx5_flow.h | 65 +- > drivers/net/mlx5/mlx5_flow_hw.c | 283 +++++++- > 20 files changed, 1438 insertions(+), 84 deletions(-) > > -- > 2.27.0 Squashed some commits into their fixes, removed irrelevant content from commit logs, and aligned all commits to use push/pop instead of push remove. Series applied to next-net-mlx, Kindest regards, Raslan Darawsheh ^ permalink raw reply [flat|nested] 64+ messages in thread
* [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu ` (3 preceding siblings ...) 2023-04-17 9:25 ` [PATCH v1 4/8] net/mlx5: sample the srv6 last segment Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions Rongwei Liu ` (2 subsequent siblings) 7 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas Both the checksum and the IPv6 next_hdr need to be updated when adding or removing an srv6 header into/from IPv6 packets. 1. Add srv6: ste1 (push buffer with next_hdr 0) --> ste2 (set IPv6 next_hdr to 0x2b) --> ste3 (load next hop IPv6 address, and restore srv6 next_hdr) 2. Remove srv6: ste1 (set srv6 next_hdr to 0 and save the original) --> ste2 (load final IPv6 destination, restore srv6 next_hdr) --> ste3 (remove srv6 and copy srv6 next_hdr to ipv6 next_hdr) Add helpers to generate the two modify header resources for the add/remove actions. The remove-srv6 resource should be shared globally, while the add-srv6 resource can be shared or unique per flow rule. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- drivers/net/mlx5/mlx5.h | 29 +++ drivers/net/mlx5/mlx5_flow_dv.c | 386 ++++++++++++++++++++++++++++++++ 2 files changed, 415 insertions(+) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3fbec4db9e..2cb6364957 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -2314,4 +2314,33 @@ void mlx5_flex_parser_clone_free_cb(void *tool_ctx, int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev); void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev); + +int +flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num, uint8_t *buf); + +int +flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data, + uint8_t *anchor_id); + #endif /* RTE_PMD_MLX5_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f136f43b0a..4a1f61eeb7 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2128,6 +2128,392 @@ flow_dv_convert_action_modify_field field, dcopy, resource, type, error); } +/** + * Generate the 1st modify header data for IPv6 routing pop. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise.
+ */ +int +flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + uint32_t value = 0; + struct rte_flow_error error; + +#define IPV6_ROUTING_POP_MHDR_NUM1 3 + if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM1) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + /* save next_hdr to seg_left. */ + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT; + /* For COPY fill the destination field (dcopy) without mask. */ + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error); + /* Then construct the source field (field) with mask. */ + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + item.mask = &mask; + resource = &dummy.resource; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate save srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 1); + /* add nop. */ + resource->actions[1].data0 = 0; + resource->actions[1].action_type = MLX5_MODIFICATION_TYPE_NOP; + resource->actions[1].data0 = RTE_BE32(resource->actions[1].data0); + resource->actions[1].data1 = 0; + resource->actions_num += 1; + /* clear srv6 next_hdr. */ + memset(&field, 0, sizeof(field)); + memset(&dcopy, 0, sizeof(dcopy)); + memset(&mask, 0, sizeof(mask)); + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + item.spec = (void *)(uintptr_t)&value; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate clear srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM1); +#undef IPV6_ROUTING_POP_MHDR_NUM1 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 2nd modify header data for IPv6 routing pop. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + +#define IPV6_ROUTING_POP_MHDR_NUM2 5 + if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM2) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + item.mask = &mask; + data.field = RTE_FLOW_FIELD_IPV6_DST; + data.level = 0; + data.offset = 0; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 128, dev, attr, &error); + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.offset = 32; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate load final IPv6 address modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 4); + memset(&field, 0, sizeof(field)); + memset(&dcopy, 0, sizeof(dcopy)); + memset(&mask, 0, sizeof(mask)); + /* copy seg_left to srv6.next_hdr */ + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error); + data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate restore srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM2); +#undef IPV6_ROUTING_POP_MHDR_NUM2 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 1st modify header data for IPv6 routing push. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + uint8_t value; + +#define IPV6_ROUTING_PUSH_MHDR_NUM1 1 + if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM1) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + /* Set IPv6 proto to 0x2b. */ + data.field = RTE_FLOW_FIELD_IPV6_PROTO; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + resource = &dummy.resource; + item.mask = &mask; + value = IPPROTO_ROUTING; + item.spec = (void *)(uintptr_t)&value; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate modify IPv6 protocol to 0x2b failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM1); +#undef IPV6_ROUTING_PUSH_MHDR_NUM1 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 2nd modify header data for IPv6 routing push. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num, uint8_t *buf) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + uint8_t next_hdr = *buf; + +#define IPV6_ROUTING_PUSH_MHDR_NUM2 5 + if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM2) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + item.mask = &mask; + item.spec = buf + sizeof(struct rte_ipv6_routing_ext) + + (*(buf + 3) - 1) * 16; /* seg_left-1 IPv6 address */ + data.field = RTE_FLOW_FIELD_IPV6_DST; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate load srv6 next hop modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 4); + memset(&field, 0, sizeof(field)); + memset(&mask, 0, sizeof(mask)); + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + item.spec = (void *)(uintptr_t)&next_hdr; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate srv6 next header restore modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM2); +#undef IPV6_ROUTING_PUSH_MHDR_NUM2 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate IPv6 routing pop modification_cmd. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in,out] mh_data + * Pointer to modify header data buffer. + * @param[in,out] anchor_id + * Anchor ID for REMOVE command. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data, + uint8_t *anchor_id) +{ + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv || !priv->sh->cdev->config.hca_attr.flex.parse_graph_anchor) { + DRV_LOG(ERR, "Doesn't support srv6 as reformat anchor"); + return -1; + } + /* Restore IPv6 protocol from flex parser. */ + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + data.field = RTE_FLOW_FIELD_IPV6_PROTO; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, NULL, &error); + /* Then construct the source field (field) with mask. */ + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, NULL, &error); + item.mask = &mask; + resource = &dummy.resource; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, + &error)) { + DRV_LOG(ERR, "Generate copy IPv6 protocol from srv6 next header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 1); + memcpy(mh_data, resource->actions, sizeof(struct mlx5_modification_cmd)); + *anchor_id = priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + return 1; +} + /** * Validate MARK item. * -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
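A note on the field arithmetic used by the generators above. The sketch below is not part of the patch; it assumes DPDK's struct rte_ipv6_routing_ext from rte_ip.h (an 8-byte fixed header: next_hdr, hdr_len, type, segments_left, then 4 bytes of flags), and the helper names are hypothetical. It only restates the offsets that the modify-header commands target and how the push path locates the next-hop segment:

#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <rte_ip.h>

/* Bit offsets fed into the MLX5_MODIFICATION_TYPE_COPY/SET commands. */
static inline uint32_t srv6_next_hdr_bit_off(void)
{
	return offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT;
}

static inline uint32_t srv6_seg_left_bit_off(void)
{
	return offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT;
}

/*
 * Next hop loaded into the IPv6 destination on push: the
 * (segments_left - 1)-th 128-bit segment after the 8-byte fixed header.
 * This is the same arithmetic as "buf + sizeof(struct rte_ipv6_routing_ext)
 * + (*(buf + 3) - 1) * 16" in push_mhdr2 above, since segments_left is
 * byte 3 of the header.
 */
static inline const uint8_t *srv6_next_hop(const uint8_t *buf)
{
	const struct rte_ipv6_routing_ext *srh = (const void *)buf;

	return buf + sizeof(*srh) + (srh->segments_left - 1) * 16;
}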
* [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu ` (4 preceding siblings ...) 2023-04-17 9:25 ` [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 7/8] net/mlx5/hws: add setter for IPv6 routing push pop Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 8/8] net/mlx5: implement " Rongwei Liu 7 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas Add two dr_actions to implement IPv6 routing extension push and pop; the new actions are combinations of multiple actions rather than new types. Basically, there are two modify headers plus one reformat action. The action order is the same as for the encap and decap actions. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 41 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 380 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 5 + drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + 5 files changed, 428 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ed3d5efbb7..241485f905 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3438,6 +3438,7 @@ enum mlx5_ifc_header_anchors { MLX5_HEADER_ANCHOR_PACKET_START = 0x0, MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_TCP_UDP = 0x09, MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, }; diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index 2b02884dc3..da058bdb4b 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -45,6 +45,8 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_PUSH_VLAN, MLX5DR_ACTION_TYP_ASO_METER, MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, + MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, MLX5DR_ACTION_TYP_MAX, }; @@ -186,6 +188,12 @@ struct mlx5dr_rule_action { uint8_t *data; } reformat; + struct { + uint32_t offset; + uint8_t *data; + uint8_t *mhdr; + } recom; + struct { rte_be32_t vlan_hdr; } push_vlan; @@ -614,4 +622,37 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, */ int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); +/* Check if a mlx5dr action template contains srv6 push or pop actions. + * + * @param[in] at + * The action template to be parsed. + * @return true if it contains a srv6 push/pop action, false otherwise. + */ +bool +mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at); + +/* Create a combination action composed of multiple direct actions. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] type + * Type of direct rule action. + * @param[in] data_sz + * Size in bytes of data. + * @param[in] inline_data + * Header data array in case of inline action. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success, NULL otherwise. 
+ */ +struct mlx5dr_action * +mlx5dr_action_create_recombination(struct mlx5dr_context *ctx, + enum mlx5dr_action_type type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags); + #endif diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 2d93be717f..fa38654644 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -19,6 +19,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ [MLX5DR_TABLE_TYPE_NIC_RX] = { BIT(MLX5DR_ACTION_TYP_TAG), BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) | BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -29,6 +30,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -46,6 +48,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -54,6 +57,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ }, [MLX5DR_TABLE_TYPE_FDB] = { BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) | BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -64,6 +68,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -227,6 +232,18 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); } +bool mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at) +{ + int i = 0; + + for (i = 0; i < at->num_actions; i++) { + if (at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP || + at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) + return true; + } + return false; +} + static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) { DR_LOG(ERR, "Invalid action_type sequence"); @@ -501,6 +518,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->dest_tir_num = obj->id; break; case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; if (action->modify_header.num_of_actions == 1) { @@ -529,10 +547,14 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; break; case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; attr->insert_header.encap = 1; - attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + if (action->type == 
MLX5DR_ACTION_TYP_L2_TO_TNL_L2) + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + else + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; attr->insert_header.arg_id = action->reformat.arg_obj->id; attr->insert_header.header_size = action->reformat.header_size; break; @@ -1452,6 +1474,90 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, return ret; } +static int +mlx5dr_action_handle_ipv6_routing_pop(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + void *dev = flow_hw_get_dev_from_ctx(ctx); + int mh_data_size, ret; + uint8_t *srv6_data; + uint8_t anchor_id; + + if (dev == NULL) { + DR_LOG(ERR, "Invalid dev handle for IPv6 routing pop\n"); + return -1; + } + ret = flow_dv_ipv6_routing_pop_mhdr_cmd(dev, mh_data, &anchor_id); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate modify-header pattern for IPv6 routing pop\n"); + return -1; + } + srv6_data = mh_data + MLX5DR_MODIFY_ACTION_SIZE * ret; + /* Remove SRv6 headers */ + MLX5_SET(stc_ste_param_remove, srv6_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, srv6_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, srv6_data, remove_start_anchor, anchor_id); + MLX5_SET(stc_ste_param_remove, srv6_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_TCP_UDP); + mh_data_size = (ret + 1) * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, 0); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for IPv6 routing pop\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + mh_data, mh_data_size); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg IPv6 routing pop"); + goto clean_stc; + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int mlx5dr_action_handle_ipv6_routing_push(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create args for ipv6 routing push"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for ipv6 routing push"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + static int mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, size_t data_sz, @@ -1484,6 +1590,78 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, return ret; } +static int +mlx5dr_action_create_push_pop_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + ret = mlx5dr_action_handle_ipv6_routing_pop(ctx, action); + break; + + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + *((uint8_t *)data) = 0; + ret = mlx5dr_action_handle_ipv6_routing_push(ctx, data_sz, data, + bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +static struct mlx5dr_action * +mlx5dr_action_create_push_pop(struct mlx5dr_context *ctx, + enum 
mlx5dr_action_type action_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "IPv6 routing push/pop is not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Push/pop flags don't fit HWS (flags: %x)\n", flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_push_pop_hws(ctx, data_sz, inline_data, + log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create push/pop HWS.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + struct mlx5dr_action * mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, enum mlx5dr_action_reformat_type reformat_type, @@ -1540,6 +1718,169 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, return NULL; } +static int +mlx5dr_action_create_recom_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + void *eth_dev = flow_hw_get_dev_from_ctx(ctx); + struct mlx5dr_action *sub_action; + int ret; + + if (eth_dev == NULL) { + DR_LOG(ERR, "Invalid dev handle for recombination action"); + rte_errno = EINVAL; + return rte_errno; + } + memset(cmd, 0, sizeof(cmd)); + switch (action->type) { + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + ret = flow_dv_generate_ipv6_routing_pop_mhdr1(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing pop action1 pattern"); + rte_errno = EINVAL; + return rte_errno; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action1"); + rte_errno = EINVAL; + return rte_errno; + } + action->recom.action1 = sub_action; + memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD); + ret = flow_dv_generate_ipv6_routing_pop_mhdr2(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing pop action2 pattern"); + goto err; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action2"); + goto err; + } + action->recom.action2 = sub_action; + sub_action = mlx5dr_action_create_push_pop(ctx, + MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, + data_sz, data, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action3"); + goto err; + } + action->recom.action3 = sub_action; + break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + ret = flow_dv_generate_ipv6_routing_push_mhdr1(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing push action2 pattern"); + rte_errno = EINVAL; + return rte_errno; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags | MLX5DR_ACTION_FLAG_SHARED); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action2"); 
+ rte_errno = EINVAL; + return rte_errno; + } + action->recom.action2 = sub_action; + memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD); + ret = flow_dv_generate_ipv6_routing_push_mhdr2(eth_dev, NULL, cmd, + MLX5_MHDR_MAX_CMD, data); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing push action3 pattern"); + goto err; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action3"); + goto err; + } + action->recom.action3 = sub_action; + sub_action = mlx5dr_action_create_push_pop(ctx, + MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, + data_sz, data, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action1"); + goto err; + } + action->recom.action1 = sub_action; + break; + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return 0; + +err: + if (action->recom.action1) + mlx5dr_action_destroy(action->recom.action1); + if (action->recom.action2) + mlx5dr_action_destroy(action->recom.action2); + if (action->recom.action3) + mlx5dr_action_destroy(action->recom.action3); + rte_errno = EINVAL; + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_recombination(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Recom flags don't fit HWS (flags: %x)\n", flags); + rte_errno = EINVAL; + goto free_action; + } + + if (action_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP && log_bulk_size) { + DR_LOG(ERR, "IPv6 POP must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_recom_hws(ctx, data_sz, inline_data, + log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create recombination.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static int mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, size_t actions_sz, @@ -1677,6 +2018,43 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(action); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + if (action->recom.action1) { + mlx5dr_action_destroy_stcs(action->recom.action1); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action1->ctx, + action->recom.action1); + simple_free(action->recom.action1); + } + if (action->recom.action2) { + mlx5dr_action_destroy_stcs(action->recom.action2); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action2->ctx, + action->recom.action2); + simple_free(action->recom.action2); + } + if (action->recom.action3) { + mlx5dr_action_destroy_stcs(action->recom.action3); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx, + action->recom.action3); + simple_free(action->recom.action3); + } + break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + if (action->recom.action1) { + mlx5dr_action_destroy_stcs(action->recom.action1); + mlx5dr_cmd_destroy_obj(action->recom.action1->reformat.arg_obj); + simple_free(action->recom.action1); + } + if 
(action->recom.action2) { + mlx5dr_action_destroy_stcs(action->recom.action2); + simple_free(action->recom.action2); + } + if (action->recom.action3) { + mlx5dr_action_destroy_stcs(action->recom.action3); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx, + action->recom.action3); + simple_free(action->recom.action3); + } + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 17619c0057..cb51f81da1 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -130,6 +130,11 @@ struct mlx5dr_action { struct mlx5dr_devx_obj *arg_obj; uint32_t header_size; } reformat; + struct { + struct mlx5dr_action *action1; + struct mlx5dr_action *action2; + struct mlx5dr_action *action3; + } recom; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index b8049a173d..1a6ad4dd71 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -22,6 +22,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", + [MLX5DR_ACTION_TYP_IPV6_ROUTING_POP] = "POP_IPV6_ROUTING", + [MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH] = "PUSH_IPV6_ROUTING", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
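To make the calling convention of the new API concrete, here is a minimal, hypothetical usage sketch (a caller inside the driver tree, with the mlx5dr headers available, is assumed; srh_buf must hold a valid routing-header template). Pop carries no inline data and must be shared, while push takes the header template and may be bulk-allocated per rule:

static int
setup_srv6_actions(struct mlx5dr_context *ctx,
		   uint8_t *srh_buf, size_t srh_len, uint32_t log_bulk_size,
		   struct mlx5dr_action **pop_out, struct mlx5dr_action **push_out)
{
	struct mlx5dr_action *pop, *push;

	/* Shared IPv6 routing pop on the Rx domain: no inline data, no bulk. */
	pop = mlx5dr_action_create_recombination(ctx,
						 MLX5DR_ACTION_TYP_IPV6_ROUTING_POP,
						 0, NULL, 0,
						 MLX5DR_ACTION_FLAG_HWS_RX |
						 MLX5DR_ACTION_FLAG_SHARED);
	if (!pop)
		return -rte_errno;
	/* Per-rule IPv6 routing push on Tx: srh_buf holds the SRH template. */
	push = mlx5dr_action_create_recombination(ctx,
						  MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH,
						  srh_len, srh_buf, log_bulk_size,
						  MLX5DR_ACTION_FLAG_HWS_TX);
	if (!push) {
		mlx5dr_action_destroy(pop);
		return -rte_errno;
	}
	*pop_out = pop;
	*push_out = push;
	return 0;
}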
* [PATCH v1 7/8] net/mlx5/hws: add setter for IPv6 routing push pop 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu ` (5 preceding siblings ...) 2023-04-17 9:25 ` [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 2023-04-17 9:25 ` [PATCH v1 8/8] net/mlx5: implement " Rongwei Liu 7 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas The rte action will be translated to 3 dr_actions, which need 3 setters to program them. Each setter may have different reparsing properties. A setter which requires no reparse cannot share a slot with one that has reparse enabled, even if there is spare space. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 144 +++++++++++++++++++++++++++ 1 file changed, 144 insertions(+) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index fa38654644..9f2386479a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2318,6 +2318,57 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, } } +static void +mlx5dr_action_setter_ipv6_routing_pop1(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action1; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_pop2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action2; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_pop3(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action3; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + static void mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, struct mlx5dr_actions_wqe_setter *setter) @@ -2346,6 +2397,60 @@ mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, } } +static void +mlx5dr_action_setter_ipv6_routing_push1(struct mlx5dr_actions_apply_data *apply, 
+ struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_rule_action tmp; + + rule_action = &apply->rule_action[setter->idx_double]; + tmp = *rule_action; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = tmp.action->recom.action1; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->reformat.offset = tmp.recom.offset; + rule_action->reformat.data = tmp.recom.data; + mlx5dr_action_setter_insert_ptr(apply, setter); + *rule_action = tmp; +} + +static void +mlx5dr_action_setter_ipv6_routing_push2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = action->recom.action2; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_push3(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_rule_action tmp; + + rule_action = &apply->rule_action[setter->idx_double]; + tmp = *rule_action; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = tmp.action->recom.action3; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + rule_action->modify_header.offset = tmp.recom.offset; + rule_action->modify_header.data = tmp.recom.mhdr; + MLX5_ASSERT(rule_action->action->modify_header.num_of_actions > 1); + mlx5dr_action_setter_modify_header(apply, setter); + *rule_action = tmp; +} + static void mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, struct mlx5dr_actions_wqe_setter *setter) @@ -2553,6 +2658,45 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + /* Double internal modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop1; + setter->idx_double = i; + setter++; + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop2; + setter->idx_double = i; + setter++; + /* restore IPv6 protocol + pop via modify list. 
*/ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop3; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + /* Can't squeeze with reparsing setter */ + if (setter->flags & ASF_REPARSE) + setter++; + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push1; + setter->idx_double = i; + setter++; + /* Set IPv6 protocol to 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push2; + setter->idx_double = i; + setter++; + /* Load next hop IPv6 address and restore srv6.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push3; + setter->idx_double = i; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); -- 2.27.0 ^ permalink raw reply [flat|nested] 64+ messages in thread
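All six setters above share one shape: swap the combination action for one of its sub-actions, delegate to an existing stock setter, and restore the rule action so the next setter in the chain sees the original. The following self-contained toy model restates that pattern; the types and names are simplified stand-ins, not the real mlx5dr structures:

#include <stdio.h>

struct toy_action {
	const char *name;
	struct toy_action *sub[3];	/* stand-in for recom.action1..action3 */
};

struct toy_rule_action {
	struct toy_action *action;
};

/* Stand-in for an existing setter such as the modify-header setter. */
static void stock_setter(struct toy_rule_action *ra)
{
	printf("program WQE segment with %s\n", ra->action->name);
}

/* The delegation pattern used by the pop1/pop2/pop3 setters above. */
static void combo_setter(struct toy_rule_action *ra, int idx)
{
	struct toy_action *combo = ra->action;

	ra->action = combo->sub[idx];	/* swap in the sub-action */
	stock_setter(ra);		/* reuse the stock setter */
	ra->action = combo;		/* restore for the next setter */
}

int main(void)
{
	struct toy_action mh1 = { .name = "modify-header #1" };
	struct toy_action mh2 = { .name = "modify-header #2" };
	struct toy_action rmv = { .name = "remove-header" };
	struct toy_action pop = {
		.name = "ipv6-routing-pop",
		.sub = { &mh1, &mh2, &rmv },
	};
	struct toy_rule_action ra = { .action = &pop };
	int i;

	for (i = 0; i < 3; i++)
		combo_setter(&ra, i);
	return 0;
}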
* [PATCH v1 8/8] net/mlx5: implement IPv6 routing push pop 2023-04-17 9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu ` (6 preceding siblings ...) 2023-04-17 9:25 ` [PATCH v1 7/8] net/mlx5/hws: add setter for IPv6 routing push pop Rongwei Liu @ 2023-04-17 9:25 ` Rongwei Liu 7 siblings, 0 replies; 64+ messages in thread From: Rongwei Liu @ 2023-04-17 9:25 UTC (permalink / raw) To: dev, matan, viacheslavo, orika, thomas Reserve the push data buffer for each job; the maximum length is set to 128 for now. Only type IPPROTO_ROUTING is supported when translating the rte flow action. Pop actions must be shared globally, and only TCP or UDP is supported as the next layer. Signed-off-by: Rongwei Liu <rongweil@nvidia.com> --- doc/guides/nics/mlx5.rst | 9 +- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 25 ++- drivers/net/mlx5/mlx5_flow_hw.c | 268 ++++++++++++++++++++++++++++++-- 4 files changed, 291 insertions(+), 12 deletions(-) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 7a137d5f6a..11b7864d23 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -162,7 +162,7 @@ Features - Sub-Function. - Matching on represented port. - Matching on aggregated affinity. - +- Push or remove IPv6 routing extension. Limitations ----------- @@ -694,6 +694,13 @@ Limitations The flow engine of a process cannot move from active to standby mode if preceding active application rules are still present and vice versa. +- IPv6 routing extension push or remove: + + - Supported only with HW Steering enabled (``dv_flow_en`` = 2). + - Supported in non-zero group (no limits on transfer domain if `fdb_def_rule_en` = 1, which is the default). + - Only supports TCP or UDP as the next layer. + - IPv6 routing header must be the only present extension. + Statistics ---------- diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 2cb6364957..5c568070a3 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -364,6 +364,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 821c6ca281..97dc7c3b4d 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -311,6 +311,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42) #define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43) #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_POP (1ull << 45) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 46) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -538,6 +540,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -1167,6 +1170,8 @@ struct rte_flow_hw { #pragma GCC diagnostic error "-Wpedantic" #endif +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1211,6 +1216,12 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 routing push data len. */ + uint16_t len; + /* Modify header actions to keep valid checksum. 
*/ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + } recom; struct { uint32_t id; } shared_meter; @@ -1253,6 +1264,7 @@ struct rte_flow_actions_template { uint16_t *actions_off; /* DR action offset for given rte action offset. */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push pop action. */ uint32_t refcnt; /* Reference counter. */ uint16_t rx_cpy_pos; /* Action position of Rx metadata to be copied. */ uint8_t flex_item; /* flex item index. */ @@ -1275,7 +1287,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push pop action struct. */ +struct mlx5_hw_push_pop_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_pop action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1304,6 +1323,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/Pop action. */ + struct mlx5_hw_push_pop_action *push_pop; + uint16_t push_pop_pos; /* Push/Pop action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. */ @@ -1329,7 +1351,6 @@ struct mlx5_flow_group { uint32_t idx; /* Group memory index. */ }; - #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32 diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 7e0ee8d883..d6b2953d55 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -479,6 +479,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_pop) { + if (acts->push_pop->action) + mlx5dr_action_destroy(acts->push_pop->action); + mlx5_free(acts->push_pop); + acts->push_pop = NULL; + } if (acts->mhdr) { if (acts->mhdr->action) mlx5dr_action_destroy(acts->mhdr->action); @@ -601,6 +607,53 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * @param[in] buf + * Data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. 
+ */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len, uint8_t *buf) +{ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + int ret; + + memset(cmd, 0, sizeof(cmd)); + ret = flow_dv_generate_ipv6_routing_push_mhdr2(dev, NULL, cmd, MLX5_MHDR_MAX_CMD, buf); + if (ret < 0) + return NULL; + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->recom.len = len; + memcpy(act_data->recom.cmd, cmd, ret * sizeof(struct mlx5_modification_cmd)); + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1359,20 +1412,25 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *action_start = actions; struct rte_flow_action *masks = at->masks; - enum mlx5dr_action_reformat_type refmt_type = 0; + enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = (enum mlx5dr_action_type)0; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t action_pos; uint16_t jump_pos; @@ -1564,6 +1622,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH; + if (masks) { + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + } + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + recom_src = actions - action_start; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: DRV_LOG(ERR, "send to kernel action is not supported in HW steering."); goto err; 
@@ -1767,6 +1855,47 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, acts->encap_decap->shared = shared_rfmt; acts->encap_decap_pos = at->reformat_off; } + if (recom_used) { + struct mlx5_action_construct_data *act_data; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + if (push_data && !push_data_m) + bulk = rte_log2_u32(table_attr->nb_flows); + else + flag |= MLX5DR_ACTION_FLAG_SHARED; + + MLX5_ASSERT(at->recom_off != UINT16_MAX); + acts->push_pop = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_pop) + push_size, 0, SOCKET_ID_ANY); + if (!acts->push_pop) + goto err; + if (push_data && push_size) { + acts->push_pop->data_size = push_size; + memcpy(acts->push_pop->data, push_data, push_size); + } + acts->push_pop->action = mlx5dr_action_create_recombination(priv->dr_ctx, + recom_type, push_size, push_data, bulk, flag); + if (!acts->push_pop->action) + goto err; + acts->rule_acts[at->recom_off].action = acts->push_pop->action; + acts->rule_acts[at->recom_off].recom.data = acts->push_pop->data; + acts->rule_acts[at->recom_off].recom.offset = 0; + acts->push_pop->shared = flag & MLX5DR_ACTION_FLAG_SHARED; + acts->push_pop_pos = at->recom_off; + if (!acts->push_pop->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size, + acts->push_pop->data); + if (!act_data) + goto err; + /* Clear srv6 next header */ + *acts->push_pop->data = 0; + acts->rule_acts[at->recom_off].recom.mhdr = + (uint8_t *)act_data->recom.cmd; + } + } return 0; err: err = rte_errno; @@ -2143,11 +2272,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2273,6 +2404,12 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, act_data->recom.len); + MLX5_ASSERT(ipv6_push->size == act_data->recom.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -2428,6 +2565,32 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_pop && !hw_acts->push_pop->shared) { + struct mlx5_modification_cmd *mhdr; + uint32_t data_ofs, rule_data; + int i; + + rule_acts[hw_acts->push_pop_pos].recom.offset = + job->flow->idx - 1; + mhdr = (struct mlx5_modification_cmd *)rule_acts + [hw_acts->push_pop_pos].recom.mhdr; + /* Modify IPv6 dst address is in reverse order. */ + data_ofs = sizeof(struct rte_ipv6_routing_ext) + *(push_buf + 3) * 16; + data_ofs -= sizeof(uint32_t); + /* next_hop address. 
+ */ + for (i = 0; i < 4; i++) { + rule_data = flow_dv_fetch_field(push_buf + data_ofs, + sizeof(uint32_t)); + mhdr[i].data1 = rte_cpu_to_be_32(rule_data); + data_ofs -= sizeof(uint32_t); + } + /* next_hdr */ + rule_data = flow_dv_fetch_field(push_buf, sizeof(uint8_t)); + mhdr[i].data1 = rte_cpu_to_be_32(rule_data); + /* clear next_hdr for insert. */ + *push_buf = 0; + rule_acts[hw_acts->push_pop_pos].recom.data = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -3864,6 +4027,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the ipv6_ext_push action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -4046,6 +4241,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, uint16_t i; bool actions_end = false; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. */ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -4122,6 +4318,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. */ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_POP; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -4229,6 +4440,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_CONNTRACK] = MLX5DR_ACTION_TYP_ASO_CT, [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, }; static int @@ -4285,6 +4498,8 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. 
* @@ -4293,7 +4508,8 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, * NULL otherwise. */ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -4302,8 +4518,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -4332,6 +4551,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -4404,11 +4633,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for pop anchor. */ + if ((recom_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP || + recom_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -4706,6 +4949,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, at->actions_off[i] = UINT16_MAX; at->reformat_off = UINT16_MAX; at->mhdr_off = UINT16_MAX; + at->recom_off = UINT16_MAX; at->rx_cpy_pos = pos; /* * mlx5 PMD hacks indirect action index directly to the action conf. 
@@ -4734,7 +4978,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 			}
 		}
 	}
-	at->tmpl = flow_hw_dr_actions_template_create(at);
+	at->tmpl = flow_hw_dr_actions_template_create(dev, at);
 	if (!at->tmpl)
 		goto error;
 	at->action_flags = action_flags;
@@ -4779,6 +5023,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 					  NULL,
 					  "action template in using");
 	}
+	if (template->tmpl && mlx5dr_action_template_contain_srv6(template->tmpl))
+		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	if (template->tmpl)
@@ -7230,6 +7476,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			    sizeof(struct mlx5_hw_q_job) +
 			    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+			    sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
 			    sizeof(struct mlx5_modification_cmd) *
 			    MLX5_MHDR_MAX_CMD +
 			    sizeof(struct rte_flow_item) *
@@ -7244,7 +7491,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	for (i = 0; i < nb_q_updated; i++) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		uint8_t *encap = NULL;
+		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
@@ -7263,11 +7510,14 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			  &job[_queue_attr[i]->size];
 		encap = (uint8_t *)
 			 &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		items = (struct rte_flow_item *)
+		push = (uint8_t *)
 			 &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+		items = (struct rte_flow_item *)
+			 &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
 			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
 			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			priv->hw_q[i].job[j] = &job[j];
 		}
-- 
2.27.0
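The flow_hw_configure() hunks above grow each queue's job arena by
MLX5_PUSH_MAX_LEN bytes per job and carve a push_data region out of it between
the encap and item areas. A simplified sketch of that carve-up follows; the
helper name is invented for illustration, and the job-pointer array that
precedes the jobs in the real arena is omitted:

/* Simplified sketch of the per-queue job arena layout set up in
 * flow_hw_configure(); `base` is the zeroed arena and `size` the queue
 * depth (_queue_attr[i]->size). Mirrors the pointer math in the hunk above.
 */
static void
carve_job_arena(uint8_t *base, uint32_t size)
{
	struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)base;
	struct mlx5_modification_cmd *mhdr_cmd =
		(struct mlx5_modification_cmd *)&job[size];
	uint8_t *encap = (uint8_t *)&mhdr_cmd[size * MLX5_MHDR_MAX_CMD];
	uint8_t *push = &encap[size * MLX5_ENCAP_MAX_LEN];	/* new region */
	struct rte_flow_item *items =
		(struct rte_flow_item *)&push[size * MLX5_PUSH_MAX_LEN];
	uint32_t j;

	for (j = 0; j < size; j++) {
		job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
		job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
		job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];	/* new */
		job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
	}
}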
end of thread, other threads: [~2023-11-02 13:44 UTC | newest]

Thread overview: 64+ messages
2023-04-17  9:25 [PATCH v1 0/8] add IPv6 extension push remove Rongwei Liu
2023-04-17  9:25 ` [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Rongwei Liu
2023-05-24  6:55   ` Ori Kam
2023-05-24  7:39     ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu
2023-05-24  7:39       ` [PATCH v1 1/2] ethdev: add IPv6 extension push remove action Rongwei Liu
2023-05-24 10:30         ` Ori Kam
2023-05-24  7:39       ` [PATCH v1 2/2] app/testpmd: add IPv6 extension push remove cli Rongwei Liu
2023-06-02 14:39       ` [PATCH v1 0/2] add IPv6 extension push remove Ferruh Yigit
2023-07-10  2:32         ` Rongwei Liu
2023-07-10  8:55           ` Ferruh Yigit
2023-07-10 14:41             ` Stephen Hemminger
2023-07-11  6:16               ` Thomas Monjalon
2023-09-19  8:12 ` [PATCH v3] net/mlx5: add test for live migration Rongwei Liu
2023-10-16  8:19   ` Thomas Monjalon
2023-10-16  8:25     ` Rongwei Liu
2023-10-16  9:26       ` Rongwei Liu
2023-10-16  9:26       ` Thomas Monjalon
2023-10-16  9:29         ` Rongwei Liu
2023-10-25  9:07           ` Rongwei Liu
2023-10-16  9:22 ` [PATCH v4] " Rongwei Liu
2023-10-25  9:36 ` [PATCH v5] " Rongwei Liu
2023-10-25  9:41   ` Thomas Monjalon
2023-10-25  9:45 ` [PATCH v6] " Rongwei Liu
2023-10-25  9:48   ` [PATCH v5] " Rongwei Liu
2023-10-25  9:50 ` [PATCH v7] " Rongwei Liu
2023-10-25 13:10   ` Thomas Monjalon
2023-10-26  8:15   ` Raslan Darawsheh
2023-04-17  9:25 ` [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli Rongwei Liu
2023-05-24  7:06   ` Ori Kam
2023-04-17  9:25 ` [PATCH v1 3/8] net/mlx5/hws: add no reparse support Rongwei Liu
2023-04-17  9:25 ` [PATCH v1 4/8] net/mlx5: sample the srv6 last segment Rongwei Liu
2023-10-31  9:42 ` [PATCH v2 0/6] support IPv6 extension push remove Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 5/6] net/mlx5: implement " Rongwei Liu
2023-10-31  9:42   ` [PATCH v2 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu
2023-10-31 10:51 ` [PATCH v3 0/6] support IPv6 extension push remove Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 1/6] net/mlx5: sample the srv6 last segment Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 2/6] net/mlx5/hws: fix potential wrong errno value Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 5/6] net/mlx5: implement " Rongwei Liu
2023-10-31 10:51   ` [PATCH v3 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu
2023-11-01  4:44 ` [PATCH v4 00/13] support IPv6 push remove action Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 01/13] net/mlx5/hws: support insert header action Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 02/13] net/mlx5/hws: support remove " Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 03/13] net/mlx5/hws: allow jump to TIR over FDB Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 04/13] net/mlx5/hws: support dynamic re-parse Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 05/13] net/mlx5/hws: dynamic re-parse for modify header Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 06/13] net/mlx5/hws: fix incorrect re-parse on complex rules Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 07/13] net/mlx5: sample the srv6 last segment Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 08/13] net/mlx5/hws: fix potential wrong rte_errno value Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 09/13] net/mlx5/hws: add IPv6 routing extension push remove actions Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 10/13] net/mlx5/hws: add setter for IPv6 routing push remove Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 11/13] net/mlx5: implement " Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 12/13] net/mlx5/hws: fix srv6 push compilation failure Rongwei Liu
2023-11-01  4:44   ` [PATCH v4 13/13] net/mlx5/hws: add stc reparse support for srv6 push pop Rongwei Liu
2023-11-02 13:44   ` [PATCH v4 00/13] support IPv6 push remove action Raslan Darawsheh
2023-04-17  9:25 ` [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource Rongwei Liu
2023-04-17  9:25 ` [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions Rongwei Liu
2023-04-17  9:25 ` [PATCH v1 7/8] net/mlx5/hws: add setter for IPv6 routing push pop Rongwei Liu
2023-04-17  9:25 ` [PATCH v1 8/8] net/mlx5: implement " Rongwei Liu