From: Alex Vesker <valex@nvidia.com>
To: "Leo Xu (Networking SW)" <yongquanx@nvidia.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: Matan Azrad <matan@nvidia.com>, Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: RE: [PATCH v3 3/3] net/mlx5/hws: add ICMPv6 ID and sequence match support
Date: Tue, 7 Feb 2023 13:05:37 +0000
Message-ID: <DM4PR12MB5150017E507180643D5CCF09CEDB9@DM4PR12MB5150.namprd12.prod.outlook.com>
In-Reply-To: <20230205134154.408984-4-yongquanx@nvidia.com>
Hi,
> -----Original Message-----
> From: Leo Xu <yongquanx@nvidia.com>
> Sent: Sunday, 5 February 2023 15:42
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Subject: [PATCH v3 3/3] net/mlx5/hws: add ICMPv6 ID and sequence match
> support
>
> This patch adds ICMPv6 ID and sequence match support for HWS.
> Since the type and code of an ICMPv6 echo are already specified by the item types:
> RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST
> RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY
> the mlx5 PMD sets the appropriate type and code automatically:
> Echo request: type(128), code(0)
> Echo reply: type(129), code(0)
> Type and code provided by the application are ignored.
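
Looks good to me. For anyone reading the archive, here is a minimal,
illustrative sketch (not part of the patch; example values only, field
names as introduced by patch 1/3 of this series) of matching an echo
request by identifier/sequence with these items:

    /* Illustrative only: ID/sequence match on an ICMPv6 echo request.
     * Type/code are filled in by the PMD from the item type. */
    struct rte_flow_item_icmp6_echo echo_spec = {
            .hdr.identifier = RTE_BE16(0x1234),
            .hdr.sequence   = RTE_BE16(1),
    };
    struct rte_flow_item_icmp6_echo echo_mask = {
            .hdr.identifier = RTE_BE16(0xffff),
            .hdr.sequence   = RTE_BE16(0xffff),
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
            { .type = RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST,
              .spec = &echo_spec, .mask = &echo_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };

Any type/code the application sets in such a spec is ignored, as the
description says, since the item type already implies them.
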
>
> Signed-off-by: Leo Xu <yongquanx@nvidia.com>
> ---
> drivers/net/mlx5/hws/mlx5dr_definer.c | 88 +++++++++++++++++++++++++++
> 1 file changed, 88 insertions(+)
>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
> index 6b98eb8c96..d56e85631d 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_definer.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
> @@ -368,6 +368,47 @@ mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc,
> 	DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask);
> }
>
> +static void
> +mlx5dr_definer_icmp6_echo_dw1_mask_set(struct mlx5dr_definer_fc *fc,
> +				       __rte_unused const void *item_spec,
> +				       uint8_t *tag)
> +{
> +	const struct rte_flow_item_icmp6 spec = {0xFF, 0xFF, 0x0};
> +	mlx5dr_definer_icmp6_dw1_set(fc, &spec, tag);
> +}
> +
> +static void
> +mlx5dr_definer_icmp6_echo_request_dw1_set(struct mlx5dr_definer_fc *fc,
> +					  __rte_unused const void *item_spec,
> +					  uint8_t *tag)
> +{
> +	const struct rte_flow_item_icmp6 spec = {RTE_ICMP6_ECHO_REQUEST, 0, 0};
> +	mlx5dr_definer_icmp6_dw1_set(fc, &spec, tag);
> +}
> +
> +static void
> +mlx5dr_definer_icmp6_echo_reply_dw1_set(struct mlx5dr_definer_fc *fc,
> +					__rte_unused const void *item_spec,
> +					uint8_t *tag)
> +{
> +	const struct rte_flow_item_icmp6 spec = {RTE_ICMP6_ECHO_REPLY, 0, 0};
> +	mlx5dr_definer_icmp6_dw1_set(fc, &spec, tag);
> +}
> +
> +static void
> +mlx5dr_definer_icmp6_echo_dw2_set(struct mlx5dr_definer_fc *fc,
> +				  const void *item_spec,
> +				  uint8_t *tag)
> +{
> +	const struct rte_flow_item_icmp6_echo *v = item_spec;
> +	rte_be32_t dw2;
> +
> +	dw2 = (rte_be_to_cpu_16(v->hdr.identifier) << __mlx5_dw_bit_off(header_icmp, ident)) |
> +	      (rte_be_to_cpu_16(v->hdr.sequence) << __mlx5_dw_bit_off(header_icmp, seq_nb));
> +
> +	DR_SET(tag, dw2, fc->byte_off, fc->bit_off, fc->bit_mask);
> +}
> +
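
Just a readability note, no change needed: assuming ident sits in the
upper 16 bits of icmp_dw2 and seq_nb in the lower 16 (my reading of the
__mlx5_dw_bit_off() values, not verified against the PRM here), the
packing above is roughly equivalent to:

    /* Sketch only: id/seq stand for v->hdr.identifier / v->hdr.sequence. */
    uint32_t dw2 = ((uint32_t)rte_be_to_cpu_16(id) << 16) |
                   (uint32_t)rte_be_to_cpu_16(seq);

i.e. both fields land in the single DW2 matched by this setter.
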
> static void
> mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc,
> const void *item_spec,
> @@ -1441,6 +1482,48 @@ mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd,
> return 0;
> }
>
> +static int
> +mlx5dr_definer_conv_item_icmp6_echo(struct mlx5dr_definer_conv_data *cd,
> +				    struct rte_flow_item *item,
> +				    int item_idx)
> +{
> +	const struct rte_flow_item_icmp6_echo *m = item->mask;
> +	struct mlx5dr_definer_fc *fc;
> +	bool inner = cd->tunnel;
> +
> +	if (!cd->relaxed) {
> +		/* Overwrite match on L4 type ICMP6 */
> +		fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
> +		fc->item_idx = item_idx;
> +		fc->tag_set = &mlx5dr_definer_icmp_protocol_set;
> +		fc->tag_mask_set = &mlx5dr_definer_ones_set;
> +		DR_CALC_SET(fc, eth_l2, l4_type, inner);
> +
> +		/* Set fixed type and code for icmp6 echo request/reply */
> +		fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1];
> +		fc->item_idx = item_idx;
> +		fc->tag_mask_set = &mlx5dr_definer_icmp6_echo_dw1_mask_set;
> +		if (item->type == RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST)
> +			fc->tag_set = &mlx5dr_definer_icmp6_echo_request_dw1_set;
> +		else /* RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY */
> +			fc->tag_set = &mlx5dr_definer_icmp6_echo_reply_dw1_set;
> +		DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1);
> +	}
> +
> +	if (!m)
> +		return 0;
> +
> +	/* Set identifier & sequence into icmp_dw2 */
> +	if (m->hdr.identifier || m->hdr.sequence) {
> +		fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2];
> +		fc->item_idx = item_idx;
> +		fc->tag_set = &mlx5dr_definer_icmp6_echo_dw2_set;
> +		DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2);
> +	}
> +
> +	return 0;
> +}
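
One more note for the archive: with no mask supplied (m == NULL) the
function returns right after the !cd->relaxed block, so in the default
(non-relaxed) mode a bare echo item still matches every ICMPv6 echo
request/reply through the fixed protocol/type/code. Illustrative item only:

    /* Mask-less item: matches any ICMPv6 echo request, ID/seq not checked. */
    struct rte_flow_item echo_any = {
            .type = RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST,
            .spec = NULL,
            .mask = NULL,
    };
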
> +
> static int
> mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
> struct rte_flow_item *item,
> @@ -1577,6 +1660,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
> 			ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i);
> 			item_flags |= MLX5_FLOW_LAYER_ICMP6;
> 			break;
> +		case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
> +		case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
> +			ret = mlx5dr_definer_conv_item_icmp6_echo(&cd, items, i);
> +			item_flags |= MLX5_FLOW_LAYER_ICMP6;
> +			break;
> 		case RTE_FLOW_ITEM_TYPE_METER_COLOR:
> 			ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
> 			item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
> --
> 2.27.0
Acked-by: Alex Vesker <valex@nvidia.com>