* [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask
@ 2018-04-12 14:31 Nelio Laranjeiro
2018-04-12 14:31 ` [dpdk-dev] [PATCH 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-12 14:31 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
The flow director mask has been mistakenly removed from the mlx5 PMD. This
series brings it back.
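For context, the mask being restored is the one applications provide through
the port configuration and that the PMD reads back when converting a flow
director filter into an rte_flow rule. A minimal, illustrative sketch of such
a configuration (values are arbitrary, only the fields used by this series
are shown):

struct rte_eth_conf port_conf = {
	.fdir_conf = {
		.mode = RTE_FDIR_MODE_PERFECT,
		.mask = {
			/* Match the full IPv4 source/destination addresses. */
			.ipv4_mask = {
				.src_ip = 0xffffffff,
				.dst_ip = 0xffffffff,
			},
			/* Match the full L4 source/destination ports. */
			.src_port_mask = 0xffff,
			.dst_port_mask = 0xffff,
		},
	},
};
/* Taken into account when the port is configured, e.g.: */
/* rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf); */

Inside the PMD this ends up as dev->data->dev_conf.fdir_conf.mask, which is
the structure the second patch consults.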
Nelio Laranjeiro (2):
net/mlx5: split L3/L4 in flow director
net/mlx5: fix flow director mask
drivers/net/mlx5/mlx5_flow.c | 155 ++++++++++++++++-------------------
1 file changed, 69 insertions(+), 86 deletions(-)
--
2.17.0
* [dpdk-dev] [PATCH 1/2] net/mlx5: split L3/L4 in flow director
2018-04-12 14:31 [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask Nelio Laranjeiro
@ 2018-04-12 14:31 ` Nelio Laranjeiro
2018-04-12 14:31 ` [dpdk-dev] [PATCH 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-12 14:31 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
This will help to bring back the mask handling which was removed when this
feature was rewritten on top of rte_flow.
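After this patch the conversion handles L3 and L4 in two consecutive switches
on the same flow type; roughly (a simplified sketch, not the literal patched
function):

/* Handle L3: one case per address family. */
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
	/* Fill attributes->l3.ipv4 and items[1] (RTE_FLOW_ITEM_TYPE_IPV4). */
	break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
	/* Fill attributes->l3.ipv6 and items[1] (RTE_FLOW_ITEM_TYPE_IPV6). */
	break;
default:
	rte_errno = ENOTSUP;
	return -rte_errno;
}
/* Handle L4: one case per UDP/TCP flow type, nothing for *_OTHER. */
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
	/* Fill attributes->l4.udp from udp4_flow, items[2] = UDP item. */
	break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
	/* Fill attributes->l4.tcp from tcp4_flow, items[2] = TCP item. */
	break;
/* ... likewise for the IPv6 UDP/TCP cases ... */
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
	break;
default:
	rte_errno = ENOTSUP;
	return -rte_errno;
}

Grouping the cases per layer means the mask handling restored by the next
patch only needs to be written once per layer rather than once per flow type.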
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 112 ++++++++++++-----------------------
1 file changed, 37 insertions(+), 75 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7ef68de49..7ba643b83 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2695,8 +2695,11 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
return -rte_errno;
}
attributes->queue.index = fdir_filter->action.rx_queue;
+ /* Handle L3. */
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+ case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+ case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
attributes->l3.ipv4.hdr = (struct ipv4_hdr){
.src_addr = input->flow.udp4_flow.ip.src_ip,
.dst_addr = input->flow.udp4_flow.ip.dst_ip,
@@ -2704,15 +2707,44 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.type_of_service = input->flow.udp4_flow.ip.tos,
.next_proto_id = input->flow.udp4_flow.ip.proto,
};
- attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp4_flow.src_port,
- .dst_port = input->flow.udp4_flow.dst_port,
- };
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV4,
.spec = &attributes->l3,
.mask = &attributes->l3,
};
+ break;
+ case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
+ attributes->l3.ipv6.hdr = (struct ipv6_hdr){
+ .hop_limits = input->flow.udp6_flow.ip.hop_limits,
+ .proto = input->flow.udp6_flow.ip.proto,
+ };
+ memcpy(attributes->l3.ipv6.hdr.src_addr,
+ input->flow.udp6_flow.ip.src_ip,
+ RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+ memcpy(attributes->l3.ipv6.hdr.dst_addr,
+ input->flow.udp6_flow.ip.dst_ip,
+ RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+ attributes->items[1] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .spec = &attributes->l3,
+ .mask = &attributes->l3,
+ };
+ break;
+ default:
+ DRV_LOG(ERR, "port %u invalid flow type%d",
+ dev->data->port_id, fdir_filter->input.flow_type);
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+ }
+ /* Handle L4. */
+ switch (fdir_filter->input.flow_type) {
+ case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+ attributes->l4.udp.hdr = (struct udp_hdr){
+ .src_port = input->flow.udp4_flow.src_port,
+ .dst_port = input->flow.udp4_flow.dst_port,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
@@ -2720,62 +2752,21 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.tcp4_flow.ip.src_ip,
- .dst_addr = input->flow.tcp4_flow.ip.dst_ip,
- .time_to_live = input->flow.tcp4_flow.ip.ttl,
- .type_of_service = input->flow.tcp4_flow.ip.tos,
- .next_proto_id = input->flow.tcp4_flow.ip.proto,
- };
attributes->l4.tcp.hdr = (struct tcp_hdr){
.src_port = input->flow.tcp4_flow.src_port,
.dst_port = input->flow.tcp4_flow.dst_port,
};
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
- case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.ip4_flow.src_ip,
- .dst_addr = input->flow.ip4_flow.dst_ip,
- .time_to_live = input->flow.ip4_flow.ttl,
- .type_of_service = input->flow.ip4_flow.tos,
- .next_proto_id = input->flow.ip4_flow.proto,
- };
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
- break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.udp6_flow.ip.hop_limits,
- .proto = input->flow.udp6_flow.ip.proto,
- };
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.udp6_flow.ip.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.udp6_flow.ip.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
attributes->l4.udp.hdr = (struct udp_hdr){
.src_port = input->flow.udp6_flow.src_port,
.dst_port = input->flow.udp6_flow.dst_port,
};
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
@@ -2783,47 +2774,18 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.tcp6_flow.ip.hop_limits,
- .proto = input->flow.tcp6_flow.ip.proto,
- };
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.tcp6_flow.ip.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.tcp6_flow.ip.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
attributes->l4.tcp.hdr = (struct tcp_hdr){
.src_port = input->flow.tcp6_flow.src_port,
.dst_port = input->flow.tcp6_flow.dst_port,
};
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
+ case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.ipv6_flow.hop_limits,
- .proto = input->flow.ipv6_flow.proto,
- };
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.ipv6_flow.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.ipv6_flow.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
break;
default:
DRV_LOG(ERR, "port %u invalid flow type%d",
--
2.17.0
* [dpdk-dev] [PATCH 2/2] net/mlx5: fix flow director mask
2018-04-12 14:31 [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask Nelio Laranjeiro
2018-04-12 14:31 ` [dpdk-dev] [PATCH 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
@ 2018-04-12 14:31 ` Nelio Laranjeiro
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 0/2] " Nelio Laranjeiro
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-12 14:31 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil, stable
During the transition to resurrect flow director on top of rte_flow, mask
handling was removed by mistake.
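Here the mask is applied while building the item specs, by ANDing each field
with the corresponding value from dev->data->dev_conf.fdir_conf.mask; an
illustrative fragment mirroring the diff below (IPv4/UDP ports):

const struct rte_eth_fdir_masks *mask =
	&dev->data->dev_conf.fdir_conf.mask;

attributes->l4.udp.hdr = (struct udp_hdr){
	.src_port = input->flow.udp4_flow.src_port & mask->src_port_mask,
	.dst_port = input->flow.udp4_flow.dst_port & mask->dst_port_mask,
};

Later revisions of this series (v3) instead hand the mask to the flow API
through dedicated mask structures.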
Fixes: 4c3e9bcdd52e ("net/mlx5: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 59 ++++++++++++++++++++++++------------
1 file changed, 40 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7ba643b83..5e75afa7f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2661,6 +2661,9 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
{
struct priv *priv = dev->data->dev_private;
const struct rte_eth_fdir_input *input = &fdir_filter->input;
+ const struct rte_eth_fdir_masks *mask =
+ &dev->data->dev_conf.fdir_conf.mask;
+ unsigned int i;
/* Validate queue number. */
if (fdir_filter->action.rx_queue >= priv->rxqs_n) {
@@ -2701,11 +2704,16 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.udp4_flow.ip.src_ip,
- .dst_addr = input->flow.udp4_flow.ip.dst_ip,
- .time_to_live = input->flow.udp4_flow.ip.ttl,
- .type_of_service = input->flow.udp4_flow.ip.tos,
- .next_proto_id = input->flow.udp4_flow.ip.proto,
+ .src_addr = input->flow.udp4_flow.ip.src_ip &
+ mask->ipv4_mask.src_ip,
+ .dst_addr = input->flow.udp4_flow.ip.dst_ip &
+ mask->ipv4_mask.dst_ip,
+ .time_to_live = input->flow.udp4_flow.ip.ttl &
+ mask->ipv4_mask.ttl,
+ .type_of_service = input->flow.udp4_flow.ip.tos &
+ mask->ipv4_mask.ttl,
+ .next_proto_id = input->flow.udp4_flow.ip.proto &
+ mask->ipv4_mask.proto,
};
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV4,
@@ -2720,12 +2728,17 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.hop_limits = input->flow.udp6_flow.ip.hop_limits,
.proto = input->flow.udp6_flow.ip.proto,
};
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.udp6_flow.ip.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.udp6_flow.ip.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+
+ for (i = 0;
+ i != RTE_DIM(attributes->l3.ipv6.hdr.src_addr);
+ ++i) {
+ attributes->l3.ipv6.hdr.src_addr[i] =
+ input->flow.udp6_flow.ip.src_ip[i] &
+ mask->ipv6_mask.src_ip[i];
+ attributes->l3.ipv6.hdr.dst_addr[i] =
+ input->flow.udp6_flow.ip.dst_ip[i] &
+ mask->ipv6_mask.dst_ip[i];
+ }
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.spec = &attributes->l3,
@@ -2742,8 +2755,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp4_flow.src_port,
- .dst_port = input->flow.udp4_flow.dst_port,
+ .src_port = input->flow.udp4_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.udp4_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
@@ -2753,8 +2768,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp4_flow.src_port,
- .dst_port = input->flow.tcp4_flow.dst_port,
+ .src_port = input->flow.tcp4_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.tcp4_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
@@ -2764,8 +2781,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp6_flow.src_port,
- .dst_port = input->flow.udp6_flow.dst_port,
+ .src_port = input->flow.udp6_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.udp6_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
@@ -2775,8 +2794,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp6_flow.src_port,
- .dst_port = input->flow.tcp6_flow.dst_port,
+ .src_port = input->flow.tcp6_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.tcp6_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
--
2.17.0
* [dpdk-dev] [PATCH v2 0/2] net/mlx5: fix flow director mask
2018-04-12 14:31 [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask Nelio Laranjeiro
2018-04-12 14:31 ` [dpdk-dev] [PATCH 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
2018-04-12 14:31 ` [dpdk-dev] [PATCH 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
@ 2018-04-13 15:28 ` Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
` (2 more replies)
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
4 siblings, 3 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-13 15:28 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
The flow director mask has been mistakenly removed from the mlx5 PMD. This
series brings it back.
Changes in v2:
Use the L3 structures instead of the L4 ones in the conversion.
Nelio Laranjeiro (2):
net/mlx5: split L3/L4 in flow director
net/mlx5: fix flow director mask
drivers/net/mlx5/mlx5_flow.c | 155 ++++++++++++++++-------------------
1 file changed, 69 insertions(+), 86 deletions(-)
--
2.17.0
* [dpdk-dev] [PATCH v2 1/2] net/mlx5: split L3/L4 in flow director
2018-04-12 14:31 [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask Nelio Laranjeiro
` (2 preceding siblings ...)
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 0/2] " Nelio Laranjeiro
@ 2018-04-13 15:28 ` Nelio Laranjeiro
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
4 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-13 15:28 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
This will help to bring back the mask handling which was removed when this
feature was rewritten on top of rte_flow.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 122 ++++++++++++-----------------------
1 file changed, 42 insertions(+), 80 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7ef68de49..acaa5f318 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2695,53 +2695,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
return -rte_errno;
}
attributes->queue.index = fdir_filter->action.rx_queue;
+ /* Handle L3. */
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.udp4_flow.ip.src_ip,
- .dst_addr = input->flow.udp4_flow.ip.dst_ip,
- .time_to_live = input->flow.udp4_flow.ip.ttl,
- .type_of_service = input->flow.udp4_flow.ip.tos,
- .next_proto_id = input->flow.udp4_flow.ip.proto,
- };
- attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp4_flow.src_port,
- .dst_port = input->flow.udp4_flow.dst_port,
- };
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
- attributes->items[2] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_UDP,
- .spec = &attributes->l4,
- .mask = &attributes->l4,
- };
- break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.tcp4_flow.ip.src_ip,
- .dst_addr = input->flow.tcp4_flow.ip.dst_ip,
- .time_to_live = input->flow.tcp4_flow.ip.ttl,
- .type_of_service = input->flow.tcp4_flow.ip.tos,
- .next_proto_id = input->flow.tcp4_flow.ip.proto,
- };
- attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp4_flow.src_port,
- .dst_port = input->flow.tcp4_flow.dst_port,
- };
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
- attributes->items[2] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_TCP,
- .spec = &attributes->l4,
- .mask = &attributes->l4,
- };
- break;
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
attributes->l3.ipv4.hdr = (struct ipv4_hdr){
.src_addr = input->flow.ip4_flow.src_ip,
@@ -2757,73 +2714,78 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
attributes->l3.ipv6.hdr = (struct ipv6_hdr){
.hop_limits = input->flow.udp6_flow.ip.hop_limits,
.proto = input->flow.udp6_flow.ip.proto,
};
memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.udp6_flow.ip.src_ip,
+ input->flow.ipv6_flow.src_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.udp6_flow.ip.dst_ip,
+ input->flow.ipv6_flow.dst_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp6_flow.src_port,
- .dst_port = input->flow.udp6_flow.dst_port,
- };
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.spec = &attributes->l3,
.mask = &attributes->l3,
};
+ break;
+ default:
+ DRV_LOG(ERR, "port %u invalid flow type%d",
+ dev->data->port_id, fdir_filter->input.flow_type);
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+ }
+ /* Handle L4. */
+ switch (fdir_filter->input.flow_type) {
+ case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+ attributes->l4.udp.hdr = (struct udp_hdr){
+ .src_port = input->flow.udp4_flow.src_port,
+ .dst_port = input->flow.udp4_flow.dst_port,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
- case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.tcp6_flow.ip.hop_limits,
- .proto = input->flow.tcp6_flow.ip.proto,
+ case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+ attributes->l4.tcp.hdr = (struct tcp_hdr){
+ .src_port = input->flow.tcp4_flow.src_port,
+ .dst_port = input->flow.tcp4_flow.dst_port,
};
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.tcp6_flow.ip.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.tcp6_flow.ip.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+ attributes->items[2] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .spec = &attributes->l4,
+ .mask = &attributes->l4,
+ };
+ break;
+ case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+ attributes->l4.udp.hdr = (struct udp_hdr){
+ .src_port = input->flow.udp6_flow.src_port,
+ .dst_port = input->flow.udp6_flow.dst_port,
+ };
+ attributes->items[2] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .spec = &attributes->l4,
+ .mask = &attributes->l4,
+ };
+ break;
+ case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
.src_port = input->flow.tcp6_flow.src_port,
.dst_port = input->flow.tcp6_flow.dst_port,
};
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
+ case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.ipv6_flow.hop_limits,
- .proto = input->flow.ipv6_flow.proto,
- };
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.ipv6_flow.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.ipv6_flow.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
break;
default:
DRV_LOG(ERR, "port %u invalid flow type%d",
--
2.17.0
* [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix flow director mask
2018-04-12 14:31 [dpdk-dev] [PATCH 0/2] net/mlx5: fix flow director mask Nelio Laranjeiro
` (3 preceding siblings ...)
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
@ 2018-04-13 15:28 ` Nelio Laranjeiro
4 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-13 15:28 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil, stable
During the transition to resurrect flow director on top of rte_flow, mask
handling was removed by mistake.
Fixes: 4c3e9bcdd52e ("net/mlx5: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 63 ++++++++++++++++++++++++------------
1 file changed, 42 insertions(+), 21 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index acaa5f318..8f3234084 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2661,6 +2661,9 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
{
struct priv *priv = dev->data->dev_private;
const struct rte_eth_fdir_input *input = &fdir_filter->input;
+ const struct rte_eth_fdir_masks *mask =
+ &dev->data->dev_conf.fdir_conf.mask;
+ unsigned int i;
/* Validate queue number. */
if (fdir_filter->action.rx_queue >= priv->rxqs_n) {
@@ -2701,11 +2704,16 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.ip4_flow.src_ip,
- .dst_addr = input->flow.ip4_flow.dst_ip,
- .time_to_live = input->flow.ip4_flow.ttl,
- .type_of_service = input->flow.ip4_flow.tos,
- .next_proto_id = input->flow.ip4_flow.proto,
+ .src_addr = input->flow.ip4_flow.src_ip &
+ mask->ipv4_mask.src_ip,
+ .dst_addr = input->flow.ip4_flow.dst_ip &
+ mask->ipv4_mask.dst_ip,
+ .time_to_live = input->flow.ip4_flow.ttl &
+ mask->ipv4_mask.ttl,
+ .type_of_service = input->flow.ip4_flow.tos &
+ mask->ipv4_mask.ttl,
+ .next_proto_id = input->flow.ip4_flow.proto &
+ mask->ipv4_mask.proto,
};
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV4,
@@ -2717,15 +2725,20 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.udp6_flow.ip.hop_limits,
- .proto = input->flow.udp6_flow.ip.proto,
+ .hop_limits = input->flow.ipv6_flow.hop_limits,
+ .proto = input->flow.ipv6_flow.proto,
};
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.ipv6_flow.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.ipv6_flow.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+
+ for (i = 0;
+ i != RTE_DIM(attributes->l3.ipv6.hdr.src_addr);
+ ++i) {
+ attributes->l3.ipv6.hdr.src_addr[i] =
+ input->flow.ipv6_flow.src_ip[i] &
+ mask->ipv6_mask.src_ip[i];
+ attributes->l3.ipv6.hdr.dst_addr[i] =
+ input->flow.ipv6_flow.dst_ip[i] &
+ mask->ipv6_mask.dst_ip[i];
+ }
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.spec = &attributes->l3,
@@ -2742,8 +2755,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp4_flow.src_port,
- .dst_port = input->flow.udp4_flow.dst_port,
+ .src_port = input->flow.udp4_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.udp4_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
@@ -2753,8 +2768,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp4_flow.src_port,
- .dst_port = input->flow.tcp4_flow.dst_port,
+ .src_port = input->flow.tcp4_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.tcp4_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
@@ -2764,8 +2781,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp6_flow.src_port,
- .dst_port = input->flow.udp6_flow.dst_port,
+ .src_port = input->flow.udp6_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.udp6_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
@@ -2775,8 +2794,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp6_flow.src_port,
- .dst_port = input->flow.tcp6_flow.dst_port,
+ .src_port = input->flow.tcp6_flow.src_port &
+ mask->src_port_mask,
+ .dst_port = input->flow.tcp6_flow.dst_port &
+ mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
--
2.17.0
* [dpdk-dev] [PATCH v3 0/2] net/mlx5: fix flow director mask
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 0/2] " Nelio Laranjeiro
@ 2018-04-17 9:01 ` Nelio Laranjeiro
2018-04-23 5:39 ` Shahaf Shuler
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
2 siblings, 1 reply; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-17 9:01 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
The flow director mask has been mistakenly removed from the mlx5 PMD. This
series brings it back.
Changes in v3:
Let the flow API handle the mask provided by the user.
Changes in v2:
Use the L3 structures instead of the L4 ones in the conversion.
Nelio Laranjeiro (2):
net/mlx5: split L3/L4 in flow director
net/mlx5: fix flow director mask
drivers/net/mlx5/mlx5_flow.c | 164 ++++++++++++++++++-----------------
1 file changed, 83 insertions(+), 81 deletions(-)
--
2.17.0
* [dpdk-dev] [PATCH v3 1/2] net/mlx5: split L3/L4 in flow director
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 0/2] " Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
@ 2018-04-17 9:01 ` Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 2/2] net/mlx5: fix flow director mask Nelio Laranjeiro
2 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-17 9:01 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil
This will help to bring back the mask handling which was removed when this
feature was rewritten on top of rte_flow.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 122 ++++++++++++-----------------------
1 file changed, 42 insertions(+), 80 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7ef68de49..acaa5f318 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2695,53 +2695,10 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
return -rte_errno;
}
attributes->queue.index = fdir_filter->action.rx_queue;
+ /* Handle L3. */
switch (fdir_filter->input.flow_type) {
case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.udp4_flow.ip.src_ip,
- .dst_addr = input->flow.udp4_flow.ip.dst_ip,
- .time_to_live = input->flow.udp4_flow.ip.ttl,
- .type_of_service = input->flow.udp4_flow.ip.tos,
- .next_proto_id = input->flow.udp4_flow.ip.proto,
- };
- attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp4_flow.src_port,
- .dst_port = input->flow.udp4_flow.dst_port,
- };
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
- attributes->items[2] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_UDP,
- .spec = &attributes->l4,
- .mask = &attributes->l4,
- };
- break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
- attributes->l3.ipv4.hdr = (struct ipv4_hdr){
- .src_addr = input->flow.tcp4_flow.ip.src_ip,
- .dst_addr = input->flow.tcp4_flow.ip.dst_ip,
- .time_to_live = input->flow.tcp4_flow.ip.ttl,
- .type_of_service = input->flow.tcp4_flow.ip.tos,
- .next_proto_id = input->flow.tcp4_flow.ip.proto,
- };
- attributes->l4.tcp.hdr = (struct tcp_hdr){
- .src_port = input->flow.tcp4_flow.src_port,
- .dst_port = input->flow.tcp4_flow.dst_port,
- };
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV4,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
- attributes->items[2] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_TCP,
- .spec = &attributes->l4,
- .mask = &attributes->l4,
- };
- break;
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
attributes->l3.ipv4.hdr = (struct ipv4_hdr){
.src_addr = input->flow.ip4_flow.src_ip,
@@ -2757,73 +2714,78 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
+ case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
attributes->l3.ipv6.hdr = (struct ipv6_hdr){
.hop_limits = input->flow.udp6_flow.ip.hop_limits,
.proto = input->flow.udp6_flow.ip.proto,
};
memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.udp6_flow.ip.src_ip,
+ input->flow.ipv6_flow.src_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.udp6_flow.ip.dst_ip,
+ input->flow.ipv6_flow.dst_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- attributes->l4.udp.hdr = (struct udp_hdr){
- .src_port = input->flow.udp6_flow.src_port,
- .dst_port = input->flow.udp6_flow.dst_port,
- };
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.spec = &attributes->l3,
.mask = &attributes->l3,
};
+ break;
+ default:
+ DRV_LOG(ERR, "port %u invalid flow type%d",
+ dev->data->port_id, fdir_filter->input.flow_type);
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+ }
+ /* Handle L4. */
+ switch (fdir_filter->input.flow_type) {
+ case RTE_ETH_FLOW_NONFRAG_IPV4_UDP:
+ attributes->l4.udp.hdr = (struct udp_hdr){
+ .src_port = input->flow.udp4_flow.src_port,
+ .dst_port = input->flow.udp4_flow.dst_port,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
- case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.tcp6_flow.ip.hop_limits,
- .proto = input->flow.tcp6_flow.ip.proto,
+ case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
+ attributes->l4.tcp.hdr = (struct tcp_hdr){
+ .src_port = input->flow.tcp4_flow.src_port,
+ .dst_port = input->flow.tcp4_flow.dst_port,
};
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.tcp6_flow.ip.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.tcp6_flow.ip.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+ attributes->items[2] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .spec = &attributes->l4,
+ .mask = &attributes->l4,
+ };
+ break;
+ case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
+ attributes->l4.udp.hdr = (struct udp_hdr){
+ .src_port = input->flow.udp6_flow.src_port,
+ .dst_port = input->flow.udp6_flow.dst_port,
+ };
+ attributes->items[2] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .spec = &attributes->l4,
+ .mask = &attributes->l4,
+ };
+ break;
+ case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
attributes->l4.tcp.hdr = (struct tcp_hdr){
.src_port = input->flow.tcp6_flow.src_port,
.dst_port = input->flow.tcp6_flow.dst_port,
};
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
.mask = &attributes->l4,
};
break;
+ case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
- attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.ipv6_flow.hop_limits,
- .proto = input->flow.ipv6_flow.proto,
- };
- memcpy(attributes->l3.ipv6.hdr.src_addr,
- input->flow.ipv6_flow.src_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- memcpy(attributes->l3.ipv6.hdr.dst_addr,
- input->flow.ipv6_flow.dst_ip,
- RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
- attributes->items[1] = (struct rte_flow_item){
- .type = RTE_FLOW_ITEM_TYPE_IPV6,
- .spec = &attributes->l3,
- .mask = &attributes->l3,
- };
break;
default:
DRV_LOG(ERR, "port %u invalid flow type%d",
--
2.17.0
* [dpdk-dev] [PATCH v3 2/2] net/mlx5: fix flow director mask
2018-04-13 15:28 ` [dpdk-dev] [PATCH v2 0/2] " Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 1/2] net/mlx5: split L3/L4 in flow director Nelio Laranjeiro
@ 2018-04-17 9:01 ` Nelio Laranjeiro
2 siblings, 0 replies; 10+ messages in thread
From: Nelio Laranjeiro @ 2018-04-17 9:01 UTC (permalink / raw)
To: dev, Yongseok Koh; +Cc: Adrien Mazarguil, stable
During the transition to resurrect flow director on top of rte_flow, mask
handling was removed by mistake.
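In this revision the mask is carried in dedicated l3_mask/l4_mask unions and
the rte_flow items point their .mask at them, so the flow API applies the
user-provided mask itself; schematically (fragment mirroring the diff below,
UDP case):

attributes->l4_mask.udp.hdr = (struct udp_hdr){
	.src_port = mask->src_port_mask,
	.dst_port = mask->dst_port_mask,
};
attributes->items[2] = (struct rte_flow_item){
	.type = RTE_FLOW_ITEM_TYPE_UDP,
	.spec = &attributes->l4,
	.mask = &attributes->l4_mask,
};

Before this fix both .spec and .mask pointed at the same structure, so the
mask configured by the application was never taken into account.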
Fixes: 4c3e9bcdd52e ("net/mlx5: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
drivers/net/mlx5/mlx5_flow.c | 56 ++++++++++++++++++++++++++++++------
1 file changed, 48 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index acaa5f318..7e3bdcc66 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -442,10 +442,18 @@ struct mlx5_fdir {
struct rte_flow_item_ipv4 ipv4;
struct rte_flow_item_ipv6 ipv6;
} l3;
+ union {
+ struct rte_flow_item_ipv4 ipv4;
+ struct rte_flow_item_ipv6 ipv6;
+ } l3_mask;
union {
struct rte_flow_item_udp udp;
struct rte_flow_item_tcp tcp;
} l4;
+ union {
+ struct rte_flow_item_udp udp;
+ struct rte_flow_item_tcp tcp;
+ } l4_mask;
struct rte_flow_action_queue queue;
};
@@ -2661,6 +2669,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
{
struct priv *priv = dev->data->dev_private;
const struct rte_eth_fdir_input *input = &fdir_filter->input;
+ const struct rte_eth_fdir_masks *mask =
+ &dev->data->dev_conf.fdir_conf.mask;
/* Validate queue number. */
if (fdir_filter->action.rx_queue >= priv->rxqs_n) {
@@ -2707,29 +2717,43 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.type_of_service = input->flow.ip4_flow.tos,
.next_proto_id = input->flow.ip4_flow.proto,
};
+ attributes->l3_mask.ipv4.hdr = (struct ipv4_hdr){
+ .src_addr = mask->ipv4_mask.src_ip,
+ .dst_addr = mask->ipv4_mask.dst_ip,
+ .time_to_live = mask->ipv4_mask.ttl,
+ .type_of_service = mask->ipv4_mask.tos,
+ .next_proto_id = mask->ipv4_mask.proto,
+ };
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV4,
.spec = &attributes->l3,
- .mask = &attributes->l3,
+ .mask = &attributes->l3_mask,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
attributes->l3.ipv6.hdr = (struct ipv6_hdr){
- .hop_limits = input->flow.udp6_flow.ip.hop_limits,
- .proto = input->flow.udp6_flow.ip.proto,
+ .hop_limits = input->flow.ipv6_flow.hop_limits,
+ .proto = input->flow.ipv6_flow.proto,
};
+
memcpy(attributes->l3.ipv6.hdr.src_addr,
input->flow.ipv6_flow.src_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
memcpy(attributes->l3.ipv6.hdr.dst_addr,
input->flow.ipv6_flow.dst_ip,
RTE_DIM(attributes->l3.ipv6.hdr.src_addr));
+ memcpy(attributes->l3_mask.ipv6.hdr.src_addr,
+ mask->ipv6_mask.src_ip,
+ RTE_DIM(attributes->l3_mask.ipv6.hdr.src_addr));
+ memcpy(attributes->l3_mask.ipv6.hdr.dst_addr,
+ mask->ipv6_mask.dst_ip,
+ RTE_DIM(attributes->l3_mask.ipv6.hdr.src_addr));
attributes->items[1] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.spec = &attributes->l3,
- .mask = &attributes->l3,
+ .mask = &attributes->l3_mask,
};
break;
default:
@@ -2745,10 +2769,14 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.src_port = input->flow.udp4_flow.src_port,
.dst_port = input->flow.udp4_flow.dst_port,
};
+ attributes->l4_mask.udp.hdr = (struct udp_hdr){
+ .src_port = mask->src_port_mask,
+ .dst_port = mask->dst_port_mask,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
- .mask = &attributes->l4,
+ .mask = &attributes->l4_mask,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV4_TCP:
@@ -2756,10 +2784,14 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.src_port = input->flow.tcp4_flow.src_port,
.dst_port = input->flow.tcp4_flow.dst_port,
};
+ attributes->l4_mask.tcp.hdr = (struct tcp_hdr){
+ .src_port = mask->src_port_mask,
+ .dst_port = mask->dst_port_mask,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
- .mask = &attributes->l4,
+ .mask = &attributes->l4_mask,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
@@ -2767,10 +2799,14 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.src_port = input->flow.udp6_flow.src_port,
.dst_port = input->flow.udp6_flow.dst_port,
};
+ attributes->l4_mask.udp.hdr = (struct udp_hdr){
+ .src_port = mask->src_port_mask,
+ .dst_port = mask->dst_port_mask,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_UDP,
.spec = &attributes->l4,
- .mask = &attributes->l4,
+ .mask = &attributes->l4_mask,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
@@ -2778,10 +2814,14 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev,
.src_port = input->flow.tcp6_flow.src_port,
.dst_port = input->flow.tcp6_flow.dst_port,
};
+ attributes->l4_mask.tcp.hdr = (struct tcp_hdr){
+ .src_port = mask->src_port_mask,
+ .dst_port = mask->dst_port_mask,
+ };
attributes->items[2] = (struct rte_flow_item){
.type = RTE_FLOW_ITEM_TYPE_TCP,
.spec = &attributes->l4,
- .mask = &attributes->l4,
+ .mask = &attributes->l4_mask,
};
break;
case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
--
2.17.0
* Re: [dpdk-dev] [PATCH v3 0/2] net/mlx5: fix flow director mask
2018-04-17 9:01 ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
@ 2018-04-23 5:39 ` Shahaf Shuler
0 siblings, 0 replies; 10+ messages in thread
From: Shahaf Shuler @ 2018-04-23 5:39 UTC (permalink / raw)
To: Nélio Laranjeiro, dev, Yongseok Koh; +Cc: Adrien Mazarguil
Tuesday, April 17, 2018 12:02 PM, Nelio Laranjeiro:
> Subject: [dpdk-dev] [PATCH v3 0/2] net/mlx5: fix flow director mask
>
> The flow director mask has been mistakenly removed from the mlx5 PMD. This
> series brings it back.
>
> Changes in v3:
>
> Let the flow API handle the mask provided by the user.
>
> Changes in v2:
>
> Use the L3 structures instead of the L4 ones in the conversion.
>
> Nelio Laranjeiro (2):
> net/mlx5: split L3/L4 in flow director
> net/mlx5: fix flow director mask
>
> drivers/net/mlx5/mlx5_flow.c | 164 ++++++++++++++++++-----------------
> 1 file changed, 83 insertions(+), 81 deletions(-)
Series applied to next-net-mlx, thanks.
>
> --
> 2.17.0