* [PATCH 0/2] mlx5 supports InfiniBand BTH item match
@ 2023-06-01 7:59 Dong Zhou
2023-06-01 7:59 ` [PATCH 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Dong Zhou @ 2023-06-01 7:59 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas; +Cc: dev, rasland
mlx5 supports InfiniBand BTH item matching in both SWS (software steering) and HWS (hardware steering).
depends-on: http://patches.dpdk.org/project/dpdk/patch/20230531032653.3037946-1-dongzhou@nvidia.com/ ("ethdev: add flow item for RoCE infiniband BTH")
Dong Zhou (2):
net/mlx5: add support for infiniband BTH match
net/mlx5/hws: add support for infiniband BTH match
drivers/common/mlx5/mlx5_prm.h | 5 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_definer.h | 2 +
drivers/net/mlx5/mlx5_flow.h | 6 ++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 1 +
6 files changed, 189 insertions(+), 3 deletions(-)
--
2.27.0
* [PATCH 1/2] net/mlx5: add support for infiniband BTH match
2023-06-01 7:59 [PATCH 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
@ 2023-06-01 7:59 ` Dong Zhou
2023-06-01 8:00 ` [PATCH 2/2] net/mlx5/hws: " Dong Zhou
2023-06-06 11:07 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2 siblings, 0 replies; 8+ messages in thread
From: Dong Zhou @ 2023-06-01 7:59 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas, Matan Azrad, Suanming Mou; +Cc: dev, rasland
This patch adds support for matching the opcode and dst_qp fields of the
InfiniBand BTH (base transport header). Currently, only RoCEv2 packets are
supported; the BTH match item defaults to matching RoCEv2 packets.
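As a rough illustration of the validation rule this patch enforces (only the opcode and dst_qp fields may be masked), here is a self-contained sketch using a local stand-in struct. The field names follow the `rte_flow_item_ib_bth` layout proposed in the depended-on ethdev patch; that layout is an assumption here, and this is not the driver code itself.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Local stand-in for the BTH match mask; field names mirror the BTH
 * header fields named in the patch (assumed layout, not copied from DPDK). */
struct bth_hdr_mask {
	uint8_t opcode;
	uint8_t se, m, padcnt, tver;
	uint16_t pkey;
	uint8_t f, b, rsvd0;
	uint8_t dst_qp[3];	/* 24-bit destination QP */
	uint8_t a, rsvd1;
	uint8_t psn[3];		/* 24-bit packet sequence number */
};

/* Return 0 when the mask touches only opcode and dst_qp, mirroring the
 * check in mlx5_flow_validate_item_ib_bth(); any other non-zero field
 * is rejected ("only opcode and dst_qp are supported"). */
static int
validate_bth_mask(const struct bth_hdr_mask *mask)
{
	if (mask->se || mask->m || mask->padcnt || mask->tver ||
	    mask->pkey || mask->f || mask->b || mask->rsvd0 ||
	    mask->a || mask->rsvd1 ||
	    mask->psn[0] || mask->psn[1] || mask->psn[2])
		return -1;
	return 0;
}
```

A mask of all-ones on opcode and dst_qp alone passes; setting any bit in psn, pkey, or the flag fields fails validation, matching the error path in the diff below.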
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 5 +-
drivers/net/mlx5/mlx5_flow.h | 6 ++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++++
3 files changed, 111 insertions(+), 2 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index ed3d5efbb7..8f55fd59b3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -932,7 +932,7 @@ struct mlx5_ifc_fte_match_set_misc_bits {
u8 gre_key_h[0x18];
u8 gre_key_l[0x8];
u8 vxlan_vni[0x18];
- u8 reserved_at_b8[0x8];
+ u8 bth_opcode[0x8];
u8 geneve_vni[0x18];
u8 lag_rx_port_affinity[0x4];
u8 reserved_at_e8[0x2];
@@ -945,7 +945,8 @@ struct mlx5_ifc_fte_match_set_misc_bits {
u8 reserved_at_120[0xa];
u8 geneve_opt_len[0x6];
u8 geneve_protocol_type[0x10];
- u8 reserved_at_140[0x20];
+ u8 reserved_at_140[0x8];
+ u8 bth_dst_qp[0x18];
u8 inner_esp_spi[0x20];
u8 outer_esp_spi[0x20];
u8 reserved_at_1a0[0x60];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..c1d6a71708 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -227,6 +227,9 @@ enum mlx5_feature_name {
/* Aggregated affinity item */
#define MLX5_FLOW_ITEM_AGGR_AFFINITY (UINT64_C(1) << 49)
+/* IB BTH ITEM. */
+#define MLX5_FLOW_ITEM_IB_BTH (1ull << 51)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -364,6 +367,9 @@ enum mlx5_feature_name {
#define MLX5_UDP_PORT_VXLAN 4789
#define MLX5_UDP_PORT_VXLAN_GPE 4790
+/* UDP port numbers for RoCEv2. */
+#define MLX5_UDP_PORT_ROCEv2 4791
+
/* UDP port numbers for GENEVE. */
#define MLX5_UDP_PORT_GENEVE 6081
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d14661298c..a3b72dbb5f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7193,6 +7193,65 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate IB BTH item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] udp_dport
+ * UDP destination port
+ * @param[in] item
+ * Item specification.
+ * @param root
+ * Whether action is on root table.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_validate_item_ib_bth(struct rte_eth_dev *dev,
+ uint16_t udp_dport,
+ const struct rte_flow_item *item,
+ bool root,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ib_bth *mask = item->mask;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ib_bth *valid_mask;
+ int ret;
+
+ valid_mask = &rte_flow_item_ib_bth_mask;
+ if (udp_dport && udp_dport != MLX5_UDP_PORT_ROCEv2)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "protocol filtering not compatible"
+ " with UDP layer");
+ if (mask && (mask->hdr.se || mask->hdr.m || mask->hdr.padcnt ||
+ mask->hdr.tver || mask->hdr.pkey || mask->hdr.f || mask->hdr.b ||
+ mask->hdr.rsvd0 || mask->hdr.a || mask->hdr.rsvd1 ||
+ mask->hdr.psn[0] || mask->hdr.psn[1] || mask->hdr.psn[2]))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "only opcode and dst_qp are supported");
+ if (root || priv->sh->steering_format_version ==
+ MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "IB BTH item is not supported");
+ if (!mask)
+ mask = &rte_flow_item_ib_bth_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)valid_mask,
+ sizeof(struct rte_flow_item_ib_bth),
+ MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
/**
* Internal validation function. For validating both actions and items.
*
@@ -7700,6 +7759,14 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ ret = mlx5_flow_validate_item_ib_bth(dev, udp_dport,
+ items, is_root, error);
+ if (ret < 0)
+ return ret;
+
+ last_item = MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -10971,6 +11038,37 @@ flow_dv_translate_item_aggr_affinity(void *key,
affinity_v->affinity & affinity_m->affinity);
}
+static void
+flow_dv_translate_item_ib_bth(void *key,
+ const struct rte_flow_item *item,
+ int inner, uint32_t key_type)
+{
+ const struct rte_flow_item_ib_bth *bth_m;
+ const struct rte_flow_item_ib_bth *bth_v;
+ void *headers_v, *misc_v;
+ uint16_t udp_dport;
+ char *qpn_v;
+ int i, size;
+
+ headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) :
+ MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+ if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) {
+ udp_dport = key_type & MLX5_SET_MATCHER_M ?
+ 0xFFFF : MLX5_UDP_PORT_ROCEv2;
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, udp_dport);
+ }
+ if (MLX5_ITEM_VALID(item, key_type))
+ return;
+ MLX5_ITEM_UPDATE(item, key_type, bth_v, bth_m, &rte_flow_item_ib_bth_mask);
+ misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc_v, bth_opcode,
+ bth_v->hdr.opcode & bth_m->hdr.opcode);
+ qpn_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, bth_dst_qp);
+ size = sizeof(bth_m->hdr.dst_qp);
+ for (i = 0; i < size; ++i)
+ qpn_v[i] = bth_m->hdr.dst_qp[i] & bth_v->hdr.dst_qp[i];
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -13772,6 +13870,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
flow_dv_translate_item_aggr_affinity(key, items, key_type);
last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ flow_dv_translate_item_ib_bth(key, items, tunnel, key_type);
+ last_item = MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
break;
}
--
2.27.0
* [PATCH 2/2] net/mlx5/hws: add support for infiniband BTH match
2023-06-01 7:59 [PATCH 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2023-06-01 7:59 ` [PATCH 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
@ 2023-06-01 8:00 ` Dong Zhou
2023-06-06 11:07 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2 siblings, 0 replies; 8+ messages in thread
From: Dong Zhou @ 2023-06-01 8:00 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas, Matan Azrad, Suanming Mou; +Cc: dev, rasland
This patch adds support for matching the opcode and dst_qp fields of the
InfiniBand BTH (base transport header). Currently, only RoCEv2 packets are
supported; the BTH match item defaults to matching RoCEv2 packets.
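The "defaults to matching RoCEv2" behavior works by implicitly pinning the UDP destination port when the rule does not set one: the value key gets port 4791 and the mask key gets a full 16-bit mask. A minimal sketch of that defaulting decision, under the assumption that a zero current port means "not set by the user":

```c
#include <stdint.h>
#include <assert.h>

#define UDP_PORT_ROCEV2 4791u

/* Sketch of the implicit UDP-dport defaulting applied when a BTH item
 * appears without an explicit UDP port match: the mask key matches the
 * whole port field (0xFFFF), the value key matches 4791. A port already
 * set by the user is left untouched. */
static uint16_t
default_bth_udp_dport(uint16_t current_dport, int is_mask_key)
{
	if (current_dport != 0)
		return current_dport;	/* user already pinned a port */
	return is_mask_key ? 0xFFFF : UDP_PORT_ROCEV2;
}
```

This mirrors the branch in flow_dv_translate_item_ib_bth() below; the HWS definer path achieves the same effect by registering the L4 dport setter with port 4791 in non-relaxed mode.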
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Alex Vesker <valex@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_definer.h | 2 +
drivers/net/mlx5/mlx5_flow_hw.c | 1 +
3 files changed, 78 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index f92d3e8e1f..1a427c9b64 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -10,6 +10,7 @@
#define ETH_TYPE_IPV6_VXLAN 0x86DD
#define ETH_VXLAN_DEFAULT_PORT 4789
#define IP_UDP_PORT_MPLS 6635
+#define UDP_ROCEV2_PORT 4791
#define DR_FLOW_LAYER_TUNNEL_NO_MPLS (MLX5_FLOW_LAYER_TUNNEL & ~MLX5_FLOW_LAYER_MPLS)
#define STE_NO_VLAN 0x0
@@ -171,7 +172,9 @@ struct mlx5dr_definer_conv_data {
X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \
X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) \
X(SET_BE32, ipsec_spi, v->hdr.spi, rte_flow_item_esp) \
- X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp)
+ X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp) \
+ X(SET, ib_l4_udp_port, UDP_ROCEV2_PORT, rte_flow_item_ib_bth) \
+ X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth)
/* Item set function format */
#define X(set_type, func_name, value, item_type) \
@@ -583,6 +586,16 @@ mlx5dr_definer_mpls_label_set(struct mlx5dr_definer_fc *fc,
memcpy(tag + fc->byte_off + sizeof(v->label_tc_s), &v->ttl, sizeof(v->ttl));
}
+static void
+mlx5dr_definer_ib_l4_qp_set(struct mlx5dr_definer_fc *fc,
+ const void *item_spec,
+ uint8_t *tag)
+{
+ const struct rte_flow_item_ib_bth *v = item_spec;
+
+ memcpy(tag + fc->byte_off, &v->hdr.dst_qp, sizeof(v->hdr.dst_qp));
+}
+
static int
mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
@@ -2041,6 +2054,63 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static int
+mlx5dr_definer_conv_item_ib_l4(struct mlx5dr_definer_conv_data *cd,
+ struct rte_flow_item *item,
+ int item_idx)
+{
+ const struct rte_flow_item_ib_bth *m = item->mask;
+ struct mlx5dr_definer_fc *fc;
+ bool inner = cd->tunnel;
+
+ /* In order to match on RoCEv2(layer4 ib), we must match
+ * on ip_protocol and l4_dport.
+ */
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ fc->tag_set = &mlx5dr_definer_udp_protocol_set;
+ DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner);
+ }
+
+ fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ fc->tag_set = &mlx5dr_definer_ib_l4_udp_port_set;
+ DR_CALC_SET(fc, eth_l4, destination_port, inner);
+ }
+ }
+
+ if (!m)
+ return 0;
+
+ if (m->hdr.se || m->hdr.m || m->hdr.padcnt || m->hdr.tver ||
+ m->hdr.pkey || m->hdr.f || m->hdr.b || m->hdr.rsvd0 ||
+ m->hdr.a || m->hdr.rsvd1 || !is_mem_zero(m->hdr.psn, 3)) {
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
+ if (m->hdr.opcode) {
+ fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_OPCODE];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ib_l4_opcode_set;
+ DR_CALC_SET_HDR(fc, ib_l4, opcode);
+ }
+
+ if (!is_mem_zero(m->hdr.dst_qp, 3)) {
+ fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_QPN];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ib_l4_qp_set;
+ DR_CALC_SET_HDR(fc, ib_l4, qp);
+ }
+
+ return 0;
+}
+
static int
mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
struct mlx5dr_match_template *mt,
@@ -2182,6 +2252,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
item_flags |= MLX5_FLOW_LAYER_MPLS;
cd.mpls_idx++;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ ret = mlx5dr_definer_conv_item_ib_l4(&cd, items, i);
+ item_flags |= MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 90ec4ce845..6b645f4cf0 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -134,6 +134,8 @@ enum mlx5dr_definer_fname {
MLX5DR_DEFINER_FNAME_OKS2_MPLS2_I,
MLX5DR_DEFINER_FNAME_OKS2_MPLS3_I,
MLX5DR_DEFINER_FNAME_OKS2_MPLS4_I,
+ MLX5DR_DEFINER_FNAME_IB_L4_OPCODE,
+ MLX5DR_DEFINER_FNAME_IB_L4_QPN,
MLX5DR_DEFINER_FNAME_MAX,
};
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 853c94af9c..f9e7f844ea 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4969,6 +4969,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
case RTE_FLOW_ITEM_TYPE_ESP:
case RTE_FLOW_ITEM_TYPE_FLEX:
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
--
2.27.0
* [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match
2023-06-01 7:59 [PATCH 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2023-06-01 7:59 ` [PATCH 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
2023-06-01 8:00 ` [PATCH 2/2] net/mlx5/hws: " Dong Zhou
@ 2023-06-06 11:07 ` Dong Zhou
2023-06-06 11:07 ` [PATCH v1 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
` (2 more replies)
2 siblings, 3 replies; 8+ messages in thread
From: Dong Zhou @ 2023-06-06 11:07 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas; +Cc: dev, rasland
mlx5 supports InfiniBand BTH item matching in both SWS (software steering) and HWS (hardware steering).
depends-on: http://patches.dpdk.org/project/dpdk/patch/20230531032653.3037946-1-dongzhou@nvidia.com/ ("ethdev: add flow item for RoCE infiniband BTH")
v1:
- Update mlx5.ini and mlx5.rst doc in the first patch.
Dong Zhou (2):
net/mlx5: add support for infiniband BTH match
net/mlx5/hws: add support for infiniband BTH match
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 1 +
drivers/common/mlx5/mlx5_prm.h | 5 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_definer.h | 2 +
drivers/net/mlx5/mlx5_flow.h | 6 ++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 1 +
8 files changed, 191 insertions(+), 3 deletions(-)
--
2.27.0
* [PATCH v1 1/2] net/mlx5: add support for infiniband BTH match
2023-06-06 11:07 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
@ 2023-06-06 11:07 ` Dong Zhou
2023-06-06 11:07 ` [PATCH v1 2/2] net/mlx5/hws: " Dong Zhou
2023-06-19 12:15 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Raslan Darawsheh
2 siblings, 0 replies; 8+ messages in thread
From: Dong Zhou @ 2023-06-06 11:07 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas, Matan Azrad, Suanming Mou; +Cc: dev, rasland
This patch adds support for matching the opcode and dst_qp fields of the
InfiniBand BTH (base transport header). Currently, only RoCEv2 packets are
supported; the BTH match item defaults to matching RoCEv2 packets.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 1 +
drivers/common/mlx5/mlx5_prm.h | 5 +-
drivers/net/mlx5/mlx5_flow.h | 6 ++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++
5 files changed, 113 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0650e02e2d..285036a5c8 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -68,6 +68,7 @@ gre_key = Y
gre_option = Y
gtp = Y
gtp_psc = Y
+ib_bth = Y
icmp = Y
icmp6 = Y
icmp6_echo_request = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7a137d5f6a..693784e1a0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -162,6 +162,7 @@ Features
- Sub-Function.
- Matching on represented port.
- Matching on aggregated affinity.
+- Matching on IB BTH.
Limitations
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index ed3d5efbb7..8f55fd59b3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -932,7 +932,7 @@ struct mlx5_ifc_fte_match_set_misc_bits {
u8 gre_key_h[0x18];
u8 gre_key_l[0x8];
u8 vxlan_vni[0x18];
- u8 reserved_at_b8[0x8];
+ u8 bth_opcode[0x8];
u8 geneve_vni[0x18];
u8 lag_rx_port_affinity[0x4];
u8 reserved_at_e8[0x2];
@@ -945,7 +945,8 @@ struct mlx5_ifc_fte_match_set_misc_bits {
u8 reserved_at_120[0xa];
u8 geneve_opt_len[0x6];
u8 geneve_protocol_type[0x10];
- u8 reserved_at_140[0x20];
+ u8 reserved_at_140[0x8];
+ u8 bth_dst_qp[0x18];
u8 inner_esp_spi[0x20];
u8 outer_esp_spi[0x20];
u8 reserved_at_1a0[0x60];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..c1d6a71708 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -227,6 +227,9 @@ enum mlx5_feature_name {
/* Aggregated affinity item */
#define MLX5_FLOW_ITEM_AGGR_AFFINITY (UINT64_C(1) << 49)
+/* IB BTH ITEM. */
+#define MLX5_FLOW_ITEM_IB_BTH (1ull << 51)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -364,6 +367,9 @@ enum mlx5_feature_name {
#define MLX5_UDP_PORT_VXLAN 4789
#define MLX5_UDP_PORT_VXLAN_GPE 4790
+/* UDP port numbers for RoCEv2. */
+#define MLX5_UDP_PORT_ROCEv2 4791
+
/* UDP port numbers for GENEVE. */
#define MLX5_UDP_PORT_GENEVE 6081
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d14661298c..a3b72dbb5f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7193,6 +7193,65 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate IB BTH item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] udp_dport
+ * UDP destination port
+ * @param[in] item
+ * Item specification.
+ * @param root
+ * Whether action is on root table.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_validate_item_ib_bth(struct rte_eth_dev *dev,
+ uint16_t udp_dport,
+ const struct rte_flow_item *item,
+ bool root,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ib_bth *mask = item->mask;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ib_bth *valid_mask;
+ int ret;
+
+ valid_mask = &rte_flow_item_ib_bth_mask;
+ if (udp_dport && udp_dport != MLX5_UDP_PORT_ROCEv2)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "protocol filtering not compatible"
+ " with UDP layer");
+ if (mask && (mask->hdr.se || mask->hdr.m || mask->hdr.padcnt ||
+ mask->hdr.tver || mask->hdr.pkey || mask->hdr.f || mask->hdr.b ||
+ mask->hdr.rsvd0 || mask->hdr.a || mask->hdr.rsvd1 ||
+ mask->hdr.psn[0] || mask->hdr.psn[1] || mask->hdr.psn[2]))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "only opcode and dst_qp are supported");
+ if (root || priv->sh->steering_format_version ==
+ MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "IB BTH item is not supported");
+ if (!mask)
+ mask = &rte_flow_item_ib_bth_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)valid_mask,
+ sizeof(struct rte_flow_item_ib_bth),
+ MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
/**
* Internal validation function. For validating both actions and items.
*
@@ -7700,6 +7759,14 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ ret = mlx5_flow_validate_item_ib_bth(dev, udp_dport,
+ items, is_root, error);
+ if (ret < 0)
+ return ret;
+
+ last_item = MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -10971,6 +11038,37 @@ flow_dv_translate_item_aggr_affinity(void *key,
affinity_v->affinity & affinity_m->affinity);
}
+static void
+flow_dv_translate_item_ib_bth(void *key,
+ const struct rte_flow_item *item,
+ int inner, uint32_t key_type)
+{
+ const struct rte_flow_item_ib_bth *bth_m;
+ const struct rte_flow_item_ib_bth *bth_v;
+ void *headers_v, *misc_v;
+ uint16_t udp_dport;
+ char *qpn_v;
+ int i, size;
+
+ headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) :
+ MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+ if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) {
+ udp_dport = key_type & MLX5_SET_MATCHER_M ?
+ 0xFFFF : MLX5_UDP_PORT_ROCEv2;
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, udp_dport);
+ }
+ if (MLX5_ITEM_VALID(item, key_type))
+ return;
+ MLX5_ITEM_UPDATE(item, key_type, bth_v, bth_m, &rte_flow_item_ib_bth_mask);
+ misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc_v, bth_opcode,
+ bth_v->hdr.opcode & bth_m->hdr.opcode);
+ qpn_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, bth_dst_qp);
+ size = sizeof(bth_m->hdr.dst_qp);
+ for (i = 0; i < size; ++i)
+ qpn_v[i] = bth_m->hdr.dst_qp[i] & bth_v->hdr.dst_qp[i];
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -13772,6 +13870,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
flow_dv_translate_item_aggr_affinity(key, items, key_type);
last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ flow_dv_translate_item_ib_bth(key, items, tunnel, key_type);
+ last_item = MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
break;
}
--
2.27.0
* [PATCH v1 2/2] net/mlx5/hws: add support for infiniband BTH match
2023-06-06 11:07 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2023-06-06 11:07 ` [PATCH v1 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
@ 2023-06-06 11:07 ` Dong Zhou
2023-06-07 6:29 ` Ori Kam
2023-06-19 12:15 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Raslan Darawsheh
2 siblings, 1 reply; 8+ messages in thread
From: Dong Zhou @ 2023-06-06 11:07 UTC (permalink / raw)
To: orika, valex, viacheslavo, thomas, Matan Azrad, Suanming Mou; +Cc: dev, rasland
This patch adds support for matching the opcode and dst_qp fields of the
InfiniBand BTH (base transport header). Currently, only RoCEv2 packets are
supported; the BTH match item defaults to matching RoCEv2 packets.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Acked-by: Alex Vesker <valex@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_definer.h | 2 +
drivers/net/mlx5/mlx5_flow_hw.c | 1 +
3 files changed, 78 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index f92d3e8e1f..1a427c9b64 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -10,6 +10,7 @@
#define ETH_TYPE_IPV6_VXLAN 0x86DD
#define ETH_VXLAN_DEFAULT_PORT 4789
#define IP_UDP_PORT_MPLS 6635
+#define UDP_ROCEV2_PORT 4791
#define DR_FLOW_LAYER_TUNNEL_NO_MPLS (MLX5_FLOW_LAYER_TUNNEL & ~MLX5_FLOW_LAYER_MPLS)
#define STE_NO_VLAN 0x0
@@ -171,7 +172,9 @@ struct mlx5dr_definer_conv_data {
X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \
X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) \
X(SET_BE32, ipsec_spi, v->hdr.spi, rte_flow_item_esp) \
- X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp)
+ X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp) \
+ X(SET, ib_l4_udp_port, UDP_ROCEV2_PORT, rte_flow_item_ib_bth) \
+ X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth)
/* Item set function format */
#define X(set_type, func_name, value, item_type) \
@@ -583,6 +586,16 @@ mlx5dr_definer_mpls_label_set(struct mlx5dr_definer_fc *fc,
memcpy(tag + fc->byte_off + sizeof(v->label_tc_s), &v->ttl, sizeof(v->ttl));
}
+static void
+mlx5dr_definer_ib_l4_qp_set(struct mlx5dr_definer_fc *fc,
+ const void *item_spec,
+ uint8_t *tag)
+{
+ const struct rte_flow_item_ib_bth *v = item_spec;
+
+ memcpy(tag + fc->byte_off, &v->hdr.dst_qp, sizeof(v->hdr.dst_qp));
+}
+
static int
mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
@@ -2041,6 +2054,63 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static int
+mlx5dr_definer_conv_item_ib_l4(struct mlx5dr_definer_conv_data *cd,
+ struct rte_flow_item *item,
+ int item_idx)
+{
+ const struct rte_flow_item_ib_bth *m = item->mask;
+ struct mlx5dr_definer_fc *fc;
+ bool inner = cd->tunnel;
+
+ /* In order to match on RoCEv2(layer4 ib), we must match
+ * on ip_protocol and l4_dport.
+ */
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ fc->tag_set = &mlx5dr_definer_udp_protocol_set;
+ DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner);
+ }
+
+ fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ fc->tag_set = &mlx5dr_definer_ib_l4_udp_port_set;
+ DR_CALC_SET(fc, eth_l4, destination_port, inner);
+ }
+ }
+
+ if (!m)
+ return 0;
+
+ if (m->hdr.se || m->hdr.m || m->hdr.padcnt || m->hdr.tver ||
+ m->hdr.pkey || m->hdr.f || m->hdr.b || m->hdr.rsvd0 ||
+ m->hdr.a || m->hdr.rsvd1 || !is_mem_zero(m->hdr.psn, 3)) {
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
+ if (m->hdr.opcode) {
+ fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_OPCODE];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ib_l4_opcode_set;
+ DR_CALC_SET_HDR(fc, ib_l4, opcode);
+ }
+
+ if (!is_mem_zero(m->hdr.dst_qp, 3)) {
+ fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_QPN];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ib_l4_qp_set;
+ DR_CALC_SET_HDR(fc, ib_l4, qp);
+ }
+
+ return 0;
+}
+
static int
mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
struct mlx5dr_match_template *mt,
@@ -2182,6 +2252,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
item_flags |= MLX5_FLOW_LAYER_MPLS;
cd.mpls_idx++;
break;
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
+ ret = mlx5dr_definer_conv_item_ib_l4(&cd, items, i);
+ item_flags |= MLX5_FLOW_ITEM_IB_BTH;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 90ec4ce845..6b645f4cf0 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -134,6 +134,8 @@ enum mlx5dr_definer_fname {
MLX5DR_DEFINER_FNAME_OKS2_MPLS2_I,
MLX5DR_DEFINER_FNAME_OKS2_MPLS3_I,
MLX5DR_DEFINER_FNAME_OKS2_MPLS4_I,
+ MLX5DR_DEFINER_FNAME_IB_L4_OPCODE,
+ MLX5DR_DEFINER_FNAME_IB_L4_QPN,
MLX5DR_DEFINER_FNAME_MAX,
};
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 853c94af9c..f9e7f844ea 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4969,6 +4969,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
case RTE_FLOW_ITEM_TYPE_ESP:
case RTE_FLOW_ITEM_TYPE_FLEX:
+ case RTE_FLOW_ITEM_TYPE_IB_BTH:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
--
2.27.0
* RE: [PATCH v1 2/2] net/mlx5/hws: add support for infiniband BTH match
2023-06-06 11:07 ` [PATCH v1 2/2] net/mlx5/hws: " Dong Zhou
@ 2023-06-07 6:29 ` Ori Kam
0 siblings, 0 replies; 8+ messages in thread
From: Ori Kam @ 2023-06-07 6:29 UTC (permalink / raw)
To: Bill Zhou, Alex Vesker, Slava Ovsiienko,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad, Suanming Mou
Cc: dev, Raslan Darawsheh
> -----Original Message-----
> From: Bill Zhou <dongzhou@nvidia.com>
> Sent: Tuesday, June 6, 2023 2:07 PM
> To: Ori Kam <orika@nvidia.com>; Alex Vesker <valex@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; Matan Azrad <matan@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v1 2/2] net/mlx5/hws: add support for infiniband BTH
> match
>
> This patch adds support to match opcode and dst_qp fields in
> infiniband BTH. Currently, only the RoCEv2 packet is supported,
> the input BTH match item is defaulted to match one RoCEv2 packet.
>
> Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
> Acked-by: Alex Vesker <valex@nvidia.com>
> ---
Acked-by: Ori Kam <orika@nvidia.com>
* RE: [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match
2023-06-06 11:07 ` [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match Dong Zhou
2023-06-06 11:07 ` [PATCH v1 1/2] net/mlx5: add support for infiniband BTH match Dong Zhou
2023-06-06 11:07 ` [PATCH v1 2/2] net/mlx5/hws: " Dong Zhou
@ 2023-06-19 12:15 ` Raslan Darawsheh
2 siblings, 0 replies; 8+ messages in thread
From: Raslan Darawsheh @ 2023-06-19 12:15 UTC (permalink / raw)
To: Bill Zhou, Ori Kam, Alex Vesker, Slava Ovsiienko,
NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: dev
Hi,
> -----Original Message-----
> From: Bill Zhou <dongzhou@nvidia.com>
> Sent: Tuesday, June 6, 2023 2:07 PM
> To: Ori Kam <orika@nvidia.com>; Alex Vesker <valex@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v1 0/2] mlx5 supports InfiniBand BTH item match
>
> mlx5 supports InfiniBand BTH item match by SWS and HWS.
> depends-on:
> http://patches.dpdk.org/project/dpdk/patch/20230531032653.3037946-
> 1-dongzhou@nvidia.com/ ("ethdev: add flow item for RoCE infiniband BTH")
>
> v1:
> - Update mlx5.ini and mlx5.rst doc in the first patch.
>
> Dong Zhou (2):
> net/mlx5: add support for infiniband BTH match
> net/mlx5/hws: add support for infiniband BTH match
>
> doc/guides/nics/features/mlx5.ini | 1 +
> doc/guides/nics/mlx5.rst | 1 +
> drivers/common/mlx5/mlx5_prm.h | 5 +-
> drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++-
> drivers/net/mlx5/hws/mlx5dr_definer.h | 2 +
> drivers/net/mlx5/mlx5_flow.h | 6 ++
> drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++
> drivers/net/mlx5/mlx5_flow_hw.c | 1 +
> 8 files changed, 191 insertions(+), 3 deletions(-)
>
> --
> 2.27.0
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh