From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: Alexander Kozyrev <akozyrev@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "stable@dpdk.org" <stable@dpdk.org>,
Raslan Darawsheh <rasland@nvidia.com>,
Matan Azrad <matan@nvidia.com>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
Bing Zhao <bingz@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>
Subject: RE: [PATCH] net/mlx5: fix shared Rx queue port number in data path
Date: Tue, 29 Oct 2024 12:34:48 +0000
Message-ID: <DM4PR12MB754981992A3FFC07AA0C6F87DF4B2@DM4PR12MB7549.namprd12.prod.outlook.com>
In-Reply-To: <20241028175358.2268101-1-akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Monday, October 28, 2024 7:54 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Dariusz Sosnowski <dsosnowski@nvidia.com>; Bing Zhao
> <bingz@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>
> Subject: [PATCH] net/mlx5: fix shared Rx queue port number in data path
>
> A wrong CQE is used to get the shared Rx queue port number in the
> vectorized Rx burst routines. Fix the CQE indexing.
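A note for the archive, to illustrate the indexing convention this fix restores.
My understanding (not stated in the commit message, so take it as an assumption)
is that p1..p3 are the per-lane CQE offsets computed earlier from the CQE
ownership/validity check, while the returned mbufs are always filled
sequentially. A minimal sketch of the corrected pattern, mirroring the hunks
below:

    /* Illustrative sketch only, not part of the patch:
     * the mbufs are returned in order, so they are indexed with the
     * constants 1..3, while the CQ ring is read through the computed
     * offsets p1..p3, so the port is taken from cq[pos + pN].
     */
    pkts[pos]->port     = cq[pos].user_index_low;
    pkts[pos + 1]->port = cq[pos + p1].user_index_low;
    pkts[pos + 2]->port = cq[pos + p2].user_index_low;
    pkts[pos + 3]->port = cq[pos + p3].user_index_low;

Mixing the two index sets, as the old code did, could attach the port of one
completion to a different mbuf or read a CQ slot that does not belong to the
completed packet.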
>
> Fixes: 25ed2ebff1 ("net/mlx5: support shared Rx queue port data path")
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---
> drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 12 ++++++------
> drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 24 ++++++++++++------------
> drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 6 +++---
> 3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
> index cccfa7f2d3..f6e74f4180 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
> @@ -1249,9 +1249,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
> rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
> if (unlikely(rxq->shared)) {
> pkts[pos]->port = cq[pos].user_index_low;
> - pkts[pos + p1]->port = cq[pos + p1].user_index_low;
> - pkts[pos + p2]->port = cq[pos + p2].user_index_low;
> - pkts[pos + p3]->port = cq[pos + p3].user_index_low;
> + pkts[pos + 1]->port = cq[pos + p1].user_index_low;
> + pkts[pos + 2]->port = cq[pos + p2].user_index_low;
> + pkts[pos + 3]->port = cq[pos + p3].user_index_low;
> }
> if (rxq->hw_timestamp) {
> int offset = rxq->timestamp_offset;
> @@ -1295,17 +1295,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
> metadata;
> pkts[pos]->ol_flags |= metadata ? flag : 0ULL;
> metadata = rte_be_to_cpu_32
> - (cq[pos + 1].flow_table_metadata) & mask;
> + (cq[pos + p1].flow_table_metadata) & mask;
> *RTE_MBUF_DYNFIELD(pkts[pos + 1], offs, uint32_t *) =
> metadata;
> pkts[pos + 1]->ol_flags |= metadata ? flag : 0ULL;
> metadata = rte_be_to_cpu_32
> - (cq[pos + 2].flow_table_metadata) & mask;
> + (cq[pos + p2].flow_table_metadata) & mask;
> *RTE_MBUF_DYNFIELD(pkts[pos + 2], offs, uint32_t *) =
> metadata;
> pkts[pos + 2]->ol_flags |= metadata ? flag : 0ULL;
> metadata = rte_be_to_cpu_32
> - (cq[pos + 3].flow_table_metadata) & mask;
> + (cq[pos + p3].flow_table_metadata) & mask;
> *RTE_MBUF_DYNFIELD(pkts[pos + 3], offs, uint32_t *) =
> metadata;
> pkts[pos + 3]->ol_flags |= metadata ? flag : 0ULL;
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
> index 3ed688191f..942d395dc9 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
> @@ -835,13 +835,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
> rxq_cq_to_ptype_oflags_v(rxq, ptype_info, flow_tag,
> opcode, &elts[pos]);
> if (unlikely(rxq->shared)) {
> - elts[pos]->port = container_of(p0, struct mlx5_cqe,
> + pkts[pos]->port = container_of(p0, struct mlx5_cqe,
> pkt_info)->user_index_low;
> - elts[pos + 1]->port = container_of(p1, struct mlx5_cqe,
> + pkts[pos + 1]->port = container_of(p1, struct mlx5_cqe,
> pkt_info)->user_index_low;
> - elts[pos + 2]->port = container_of(p2, struct mlx5_cqe,
> + pkts[pos + 2]->port = container_of(p2, struct mlx5_cqe,
> pkt_info)->user_index_low;
> - elts[pos + 3]->port = container_of(p3, struct mlx5_cqe,
> + pkts[pos + 3]->port = container_of(p3, struct mlx5_cqe,
> pkt_info)->user_index_low;
> }
> if (unlikely(rxq->hw_timestamp)) {
> @@ -853,34 +853,34 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
> ts = rte_be_to_cpu_64
> (container_of(p0, struct mlx5_cqe,
> pkt_info)->timestamp);
> - mlx5_timestamp_set(elts[pos], offset,
> + mlx5_timestamp_set(pkts[pos], offset,
> mlx5_txpp_convert_rx_ts(sh, ts));
> ts = rte_be_to_cpu_64
> (container_of(p1, struct mlx5_cqe,
> pkt_info)->timestamp);
> - mlx5_timestamp_set(elts[pos + 1], offset,
> + mlx5_timestamp_set(pkts[pos + 1], offset,
> mlx5_txpp_convert_rx_ts(sh, ts));
> ts = rte_be_to_cpu_64
> (container_of(p2, struct mlx5_cqe,
> pkt_info)->timestamp);
> - mlx5_timestamp_set(elts[pos + 2], offset,
> + mlx5_timestamp_set(pkts[pos + 2], offset,
> mlx5_txpp_convert_rx_ts(sh, ts));
> ts = rte_be_to_cpu_64
> (container_of(p3, struct mlx5_cqe,
> pkt_info)->timestamp);
> - mlx5_timestamp_set(elts[pos + 3], offset,
> + mlx5_timestamp_set(pkts[pos + 3], offset,
> mlx5_txpp_convert_rx_ts(sh, ts));
> } else {
> - mlx5_timestamp_set(elts[pos], offset,
> + mlx5_timestamp_set(pkts[pos], offset,
> rte_be_to_cpu_64(container_of(p0,
> struct mlx5_cqe, pkt_info)->timestamp));
> - mlx5_timestamp_set(elts[pos + 1], offset,
> + mlx5_timestamp_set(pkts[pos + 1], offset,
> rte_be_to_cpu_64(container_of(p1,
> struct mlx5_cqe, pkt_info)->timestamp));
> - mlx5_timestamp_set(elts[pos + 2], offset,
> + mlx5_timestamp_set(pkts[pos + 2], offset,
> rte_be_to_cpu_64(container_of(p2,
> struct mlx5_cqe, pkt_info)->timestamp));
> - mlx5_timestamp_set(elts[pos + 3], offset,
> + mlx5_timestamp_set(pkts[pos + 3], offset,
> rte_be_to_cpu_64(container_of(p3,
> struct mlx5_cqe, pkt_info)->timestamp));
> }
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
> index 2bdd1f676d..fb59c11346 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
> @@ -783,9 +783,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
> rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
> if (unlikely(rxq->shared)) {
> pkts[pos]->port = cq[pos].user_index_low;
> - pkts[pos + p1]->port = cq[pos + p1].user_index_low;
> - pkts[pos + p2]->port = cq[pos + p2].user_index_low;
> - pkts[pos + p3]->port = cq[pos + p3].user_index_low;
> + pkts[pos + 1]->port = cq[pos + p1].user_index_low;
> + pkts[pos + 2]->port = cq[pos + p2].user_index_low;
> + pkts[pos + 3]->port = cq[pos + p3].user_index_low;
> }
> if (unlikely(rxq->hw_timestamp)) {
> int offset = rxq->timestamp_offset;
> --
> 2.43.5