From: Amiya Mohakud <amohakud@paloaltonetworks.com>
Date: Fri, 16 Sep 2022 23:50:29 +0530
Subject: Re: [PATCH] net/mlx5: fix Rx queue recovery mechanism
To: Asaf Penso <asafp@nvidia.com>
Cc: "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>, dev <dev@dpdk.org>,
 security@dpdk.org, Matan Azrad <matan@nvidia.com>, stable@dpdk.org,
 Alexander Kozyrev <akozyrev@nvidia.com>, Slava Ovsiienko <viacheslavo@nvidia.com>,
 Shahaf Shuler <shahafs@nvidia.com>

Thank you for the clarification.

On Sun, Sep 11, 2022 at 3:18 AM Asaf Penso <asafp@nvidia.com> wrote:

> Hello Amiya,
>
> This fix is for an issue with the error recovery mechanism.
>
> I assume the use case you're referring to is working with rx_vec burst +
> cqe zipping enabled + error recovery.
>
> If that's the case, we mentioned in our mlx5.rst that the recovery is not
> supported.
>
> Since there is no way to disable the recovery in our PMD, we suggested
> disabling cqe zipping.
>
> However, these days, we are working on a patch to allow the above
> combination.
>
> It should be sent to the ML in a week or two, once we fully finish testing
> it.
>
> Regards,
> Asaf Penso
>
> *From:* Amiya Mohakud <amohakud@paloaltonetworks.com>
> *Sent:* Wednesday, September 7, 2022 9:31 AM
> *To:* NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
> *Cc:* dev <dev@dpdk.org>; security@dpdk.org; Matan Azrad <matan@nvidia.com>;
> stable@dpdk.org; Alexander Kozyrev <akozyrev@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> *Subject:* Re: [PATCH] net/mlx5: fix Rx queue recovery mechanism
>
> Hi All,
>
> I would need some confirmation on this patch.
>
> For some earlier issues encountered on mlx5, we have disabled cqe_comp in
> the mlx5 driver. In that case, do we still need this fix, or will disabling
> cqe_comp take care of it as well?
>
> Regards
> Amiya
>
> On Mon, Aug 29, 2022 at 8:45 PM Thomas Monjalon <thomas@monjalon.net>
> wrote:
>
> From: Matan Azrad <matan@nvidia.com>
>
> The local variables are getting inconsistent in data receiving routines
> after queue error recovery.
> Receive queue consumer index is getting wrong, need to reset one to the
> size of the queue (as RQ was fully replenished in recovery procedure).
>
> In MPRQ case, also the local consumed strd variable should be reset.
>
> CVE-2022-28199
> Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error handling")
> Cc: stable@dpdk.org
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Signed-off-by: Matan Azrad <matan@nvidia.com>
> ---
>
> Already applied in main branch as part of the public disclosure process.
>
> ---
>  drivers/net/mlx5/mlx5_rx.c | 34 ++++++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
> index bb3ccc36e5..917c517b83 100644
> --- a/drivers/net/mlx5/mlx5_rx.c
> +++ b/drivers/net/mlx5/mlx5_rx.c
> @@ -408,6 +408,11 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
>  	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
>  }
>
> +/* Must be negative. */
> +#define MLX5_ERROR_CQE_RET (-1)
> +/* Must not be negative. */
> +#define MLX5_RECOVERY_ERROR_RET 0
> +
>  /**
>   * Handle a Rx error.
>   * The function inserts the RQ state to reset when the first error CQE is
> @@ -422,7 +427,7 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
>   *   0 when called from non-vectorized Rx burst.
>   *
>   * @return
> - *   -1 in case of recovery error, otherwise the CQE status.
> + *   MLX5_RECOVERY_ERROR_RET in case of recovery error, otherwise the CQE status.
>   */
>  int
>  mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
> @@ -451,7 +456,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
>  		sm.queue_id = rxq->idx;
>  		sm.state = IBV_WQS_RESET;
>  		if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
> -			return -1;
> +			return MLX5_RECOVERY_ERROR_RET;
>  		if (rxq_ctrl->dump_file_n <
>  		    RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) {
>  			MKSTR(err_str, "Unexpected CQE error syndrome "
> @@ -491,7 +496,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
>  			sm.queue_id = rxq->idx;
>  			sm.state = IBV_WQS_RDY;
>  			if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
> -				return -1;
> +				return MLX5_RECOVERY_ERROR_RET;
>  			if (vec) {
>  				const uint32_t elts_n =
>  					mlx5_rxq_mprq_enabled(rxq) ?
> @@ -519,7 +524,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
>  							rte_pktmbuf_free_seg
>  								(*elt);
>  						}
> -						return -1;
> +						return MLX5_RECOVERY_ERROR_RET;
>  					}
>  				}
>  				for (i = 0; i < (int)elts_n; ++i) {
> @@ -538,7 +543,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
>  		}
>  		return ret;
>  	default:
> -		return -1;
> +		return MLX5_RECOVERY_ERROR_RET;
>  	}
>  }
>
> @@ -556,7 +561,9 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
>   *   written.
>   *
>   * @return
> - *   0 in case of empty CQE, otherwise the packet size in bytes.
> + *   0 in case of empty CQE, MLX5_ERROR_CQE_RET in case of error CQE,
> + *   otherwise the packet size in regular RxQ, and striding byte
> + *   count format in mprq case.
>   */
>  static inline int
>  mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
> @@ -623,8 +630,8 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
>  						     rxq->err_state)) {
>  					ret = mlx5_rx_err_handle(rxq, 0);
>  					if (ret == MLX5_CQE_STATUS_HW_OWN ||
> -					    ret == -1)
> -						return 0;
> +					    ret == MLX5_RECOVERY_ERROR_RET)
> +						return MLX5_ERROR_CQE_RET;
>  				} else {
>  					return 0;
>  				}
> @@ -869,8 +876,10 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
>  		if (!pkt) {
>  			cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_cnt];
>  			len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt, &mcqe);
> -			if (!len) {
> +			if (len <= 0) {
>  				rte_mbuf_raw_free(rep);
> +				if (unlikely(len == MLX5_ERROR_CQE_RET))
> +					rq_ci = rxq->rq_ci << sges_n;
>  				break;
>  			}
>  			pkt = seg;
> @@ -1093,8 +1102,13 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
>  		}
>  		cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
>  		ret = mlx5_rx_poll_len(rxq, cqe, cq_mask, &mcqe);
> -		if (!ret)
> +		if (ret == 0)
>  			break;
> +		if (unlikely(ret == MLX5_ERROR_CQE_RET)) {
> +			rq_ci = rxq->rq_ci;
> +			consumed_strd = rxq->consumed_strd;
> +			break;
> +		}
>  		byte_cnt = ret;
>  		len = (byte_cnt & MLX5_MPRQ_LEN_MASK) >> MLX5_MPRQ_LEN_SHIFT;
>  		MLX5_ASSERT((int)len >= (rxq->crc_present << 2));
> --
> 2.36.1
>
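As context for the CQE zipping workaround mentioned above: CQE compression in the
mlx5 PMD is steered by the rxq_cqe_comp_en device argument described in mlx5.rst.
Below is a minimal sketch, not part of the patch, showing one way an application
could pass that devarg through the EAL allow-list option; the PCI address and
program name are hypothetical placeholders, so adapt them (and verify the devarg
semantics for your DPDK release) before use.

#include <stdio.h>

#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(void)
{
	/*
	 * Hypothetical EAL arguments: "0000:3b:00.0" is a placeholder PCI
	 * address; rxq_cqe_comp_en=0 asks the mlx5 PMD to probe the port
	 * with CQE compression (zipping) disabled on its Rx queues.
	 */
	char *eal_args[] = {
		"cqe_comp_off_demo",
		"-a", "0000:3b:00.0,rxq_cqe_comp_en=0",
	};

	if (rte_eal_init(3, eal_args) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return -1;
	}
	printf("%u port(s) probed with CQE compression disabled\n",
	       (unsigned int)rte_eth_dev_count_avail());
	rte_eal_cleanup();
	return 0;
}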