* [PATCH] net/mlx5/hws: fix return value of send queue action
  From: Alex Vesker @ 2023-03-23 12:34 UTC
  To: valex, viacheslavo, thomas, suanmingm, Matan Azrad
  Cc: dev, orika, stable

The rte_errno value should be set to a positive error code, while the
function itself should return a negative value. This aligns the code
with the other mlx5dr API functions.

Fixes: 3eb748869d2d ("net/mlx5/hws: add send layer")
Cc: stable@dpdk.org

Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad matan@nvidia.com
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 51aaf5c8e2..d650c55124 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -1007,8 +1007,8 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx,
 		break;
 	default:
-		rte_errno = -EINVAL;
-		return rte_errno;
+		rte_errno = EINVAL;
+		return -rte_errno;
 	}
 
 	return 0;
--
2.18.1
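[A minimal sketch of the calling convention this fix aligns with: a failing
call returns a negative value and the positive error code is read from
rte_errno. The function example_caller() and mlx5dr_do_something() below are
hypothetical names used only for illustration, not part of the patch.]

#include <stdio.h>
#include <rte_errno.h>

/* Hypothetical caller of an mlx5dr API that follows the convention above. */
static int example_caller(void)
{
	int ret;

	ret = mlx5dr_do_something(); /* hypothetical mlx5dr call */
	if (ret < 0) {
		/* rte_errno holds a positive errno value, e.g. EINVAL */
		printf("mlx5dr call failed: %s\n", rte_strerror(rte_errno));
		return ret;
	}

	return 0;
}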
* [PATCH] net/mlx5/hws: fix send sync drain empty queue check
  From: Alex Vesker @ 2023-03-23 12:34 UTC
  To: valex, viacheslavo, thomas, suanmingm, Matan Azrad
  Cc: dev, orika

The function that checks whether the queue is empty, used by the queue
action for SYNC and ASYNC drain, did not work correctly, since cur_post
is a free-running value and not a cyclic one. The fix is to bitwise AND
cur_post with the SQ buffer mask to get the real position.

Fixes: 90488887ee33 ("net/mlx5/hws: support synchronous drain")

Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad matan@nvidia.com
---
 drivers/net/mlx5/hws/mlx5dr_send.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index d0977ec851..c1e8616f7e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -255,7 +255,10 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue);
 
 static inline bool mlx5dr_send_engine_empty(struct mlx5dr_send_engine *queue)
 {
-	return (queue->send_ring->send_sq.cur_post == queue->send_ring->send_cq.poll_wqe);
+	struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq;
+	struct mlx5dr_send_ring_cq *send_cq = &queue->send_ring->send_cq;
+
+	return ((send_sq->cur_post & send_sq->buf_mask) == send_cq->poll_wqe);
 }
 
 static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue)
--
2.18.1
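[A standalone sketch of the underlying problem, not driver code: a
free-running producer index keeps incrementing past the ring size, so it can
only be compared with a cyclic consumer position after masking, which is what
the buf_mask AND in the patch does. The queue size and struct/field names
below are assumptions for illustration.]

#include <stdbool.h>
#include <stdint.h>

#define SQ_SIZE 256U              /* assumed power-of-two ring size */
#define SQ_MASK (SQ_SIZE - 1U)    /* plays the role of send_sq.buf_mask */

struct toy_queue {
	uint32_t cur_post;  /* free running: only ever increments */
	uint32_t poll_wqe;  /* cyclic: always stays in [0, SQ_SIZE) */
};

static bool toy_queue_empty(const struct toy_queue *q)
{
	/* Without the mask, the comparison stops holding once cur_post
	 * wraps past SQ_SIZE, so the queue never reports empty again. */
	return (q->cur_post & SQ_MASK) == q->poll_wqe;
}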
* RE: [PATCH] net/mlx5/hws: fix send sync drain empty queue check
  From: Raslan Darawsheh @ 2023-03-23 19:42 UTC
  To: Alex Vesker, Alex Vesker, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad
  Cc: dev, Ori Kam

Hi,

> -----Original Message-----
> From: Alex Vesker <valex@nvidia.com>
> Sent: Thursday, March 23, 2023 2:34 PM
> To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan
> Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>
> Subject: [PATCH] net/mlx5/hws: fix send sync drain empty queue check
>
> The function that checks if the queue is empty used on queue action for SYNC
> and ASYNC drain didn't function correctly since cur_post is a free running value
> and not cyclic.
> The fix is bitwise AND cur_post to get the real value.
>
> Fixes: 90488887ee33 ("net/mlx5/hws: support synchronous drain")
> Signed-off-by: Alex Vesker <valex@nvidia.com>
> Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
> Acked-by: Matan Azrad matan@nvidia.com

Fixed Acked-by tag,
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
* [PATCH] net/mlx5/hws: fix IPv4 frag matching
  From: Alex Vesker @ 2023-03-23 12:34 UTC
  To: valex, viacheslavo, thomas, suanmingm, Matan Azrad
  Cc: dev, orika, Hamdan Igbaria, stable

From: Hamdan Igbaria <hamdani@nvidia.com>

Fix IPv4 fragment matching when the fragment_offset field is set to a
non-zero value in the mask. The fragment_offset value is converted
using the following logic:
- If the fragment_offset mask is 0x3fff, match only on the
  ip_fragmented bit.
- Otherwise, match fragment_offset against spec and last, the same as
  any other field.

Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Cc: stable@dpdk.org

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad matan@nvidia.com
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 11 +++++++++--
 drivers/net/mlx5/hws/mlx5dr_definer.h |  9 +++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 379f4065e7..f92d3e8e1f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -127,6 +127,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET,      ipv4_version,     STE_IPV4,             rte_ipv4_hdr) \
 	X(SET_BE16, ipv4_frag,        v->fragment_offset,   rte_ipv4_hdr) \
 	X(SET_BE16, ipv4_len,         v->total_length,      rte_ipv4_hdr) \
+	X(SET,      ip_fragmented,    !!v->fragment_offset, rte_ipv4_hdr) \
 	X(SET_BE16, ipv6_payload_len, v->hdr.payload_len,   rte_flow_item_ipv6) \
 	X(SET,      ipv6_proto,       v->hdr.proto,         rte_flow_item_ipv6) \
 	X(SET,      ipv6_routing_hdr, IPPROTO_ROUTING,      rte_flow_item_ipv6) \
@@ -735,8 +736,14 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 	if (m->fragment_offset) {
 		fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)];
 		fc->item_idx = item_idx;
-		fc->tag_set = &mlx5dr_definer_ipv4_frag_set;
-		DR_CALC_SET(fc, eth_l3, fragment_offset, inner);
+		if (rte_be_to_cpu_16(m->fragment_offset) == 0x3fff) {
+			fc->tag_set = &mlx5dr_definer_ip_fragmented_set;
+			DR_CALC_SET(fc, eth_l2, ip_fragmented, inner);
+		} else {
+			fc->is_range = l && l->fragment_offset;
+			fc->tag_set = &mlx5dr_definer_ipv4_frag_set;
+			DR_CALC_SET(fc, eth_l3, ipv4_frag, inner);
+		}
 	}
 
 	if (m->next_proto_id) {
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 0cd83db756..90ec4ce845 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -226,8 +226,13 @@ struct mlx5_ifc_definer_hl_eth_l3_bits {
 	u8 time_to_live_hop_limit[0x8];
 	u8 protocol_next_header[0x8];
 	u8 identification[0x10];
-	u8 flags[0x3];
-	u8 fragment_offset[0xd];
+	union {
+		u8 ipv4_frag[0x10];
+		struct {
+			u8 flags[0x3];
+			u8 fragment_offset[0xd];
+		};
+	};
 	u8 ipv4_total_length[0x10];
 	u8 checksum[0x10];
 	u8 reserved_at_60[0xc];
--
2.18.1
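[A simplified sketch of the conversion logic described in the commit message,
not the definer code itself: a full 0x3fff mask selects the generic
"is fragmented" match, anything else is matched as an exact fragment_offset
value. The helper name, enum, and FRAG_FULL_MASK macro are assumptions made
up for this illustration.]

#include <stdint.h>
#include <rte_byteorder.h>

#define FRAG_FULL_MASK 0x3fff /* MF flag plus 13-bit fragment offset */

enum frag_match_kind {
	MATCH_IP_FRAGMENTED_BIT, /* match only "packet is a fragment" */
	MATCH_EXACT_FRAG_OFFSET, /* match offset from spec/last as usual */
};

/* Hypothetical helper: classify the rte_flow IPv4 fragment_offset mask. */
static enum frag_match_kind
classify_frag_mask(rte_be16_t frag_offset_mask)
{
	if (rte_be_to_cpu_16(frag_offset_mask) == FRAG_FULL_MASK)
		return MATCH_IP_FRAGMENTED_BIT;
	return MATCH_EXACT_FRAG_OFFSET;
}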
* RE: [PATCH] net/mlx5/hws: fix IPv4 frag matching
  From: Raslan Darawsheh @ 2023-03-23 19:43 UTC
  To: Alex Vesker, Alex Vesker, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad
  Cc: dev, Ori Kam, Hamdan Igbaria, stable

Hi,

> -----Original Message-----
> From: Alex Vesker <valex@nvidia.com>
> Sent: Thursday, March 23, 2023 2:34 PM
> To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan
> Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Hamdan Igbaria
> <hamdani@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5/hws: fix IPv4 frag matching
>
> From: Hamdan Igbaria <hamdani@nvidia.com>
>
> Fix IPv4 frag matching in case fragment_offset field is set to non zero value in
> the mask.
> fragment_offset value is converted using the following logic:
> -In case fragment_offset value was set to 0x3fff, then we will match only on
> ip_fragmented bit.
> -Otherwise we will match fragment_offset based on spec and last same as any
> other field.
>
> Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
> Cc: stable@dpdk.org
>
> Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
> Reviewed-by: Alex Vesker <valex@nvidia.com>
> Acked-by: Matan Azrad matan@nvidia.com

Fixed Acked-by tag,
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
* RE: [PATCH] net/mlx5/hws: fix return value of send queue action
  From: Raslan Darawsheh @ 2023-03-23 19:41 UTC
  To: Alex Vesker, Alex Vesker, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad
  Cc: dev, Ori Kam, stable

Hi,

> -----Original Message-----
> From: Alex Vesker <valex@nvidia.com>
> Sent: Thursday, March 23, 2023 2:34 PM
> To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan
> Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5/hws: fix return value of send queue action
>
> The rte_errno should be set to a positive error value while the ret of the
> function should return a negative value this aligns code to other mlx5dr API
> functions.
>
> Fixes: 3eb748869d2d ("net/mlx5/hws: add send layer")
> Cc: stable@dpdk.org
> Signed-off-by: Alex Vesker <valex@nvidia.com>
> Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
> Acked-by: Matan Azrad matan@nvidia.com

Fixed Acked-by tag,
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh