From mboxrd@z Thu Jan 1 00:00:00 1970
References: <0fd721555b1621dd5f5d32474cb37e92260e8888.1657976262.git.rahul.lakkireddy@chelsio.com>
In-Reply-To: <0fd721555b1621dd5f5d32474cb37e92260e8888.1657976262.git.rahul.lakkireddy@chelsio.com>
From: Christian Ehrhardt
Date: Wed, 3 Aug 2022 11:52:26 +0200
Subject: Re: [PATCH 19.11] net/cxgbe: fix Tx queue stuck with mbuf chain coalescing
To: Rahul Lakkireddy
Cc: stable@dpdk.org
List-Id: patches for DPDK stable branches

On Sat, Jul 16, 2022 at 3:04 PM Rahul Lakkireddy wrote:
>
> [ upstream commit 151e828f6427667faf3fdfaa00d14a65c7f57cd6 ]

Thank you, your patch was in time, but sadly a few fell through the
cracks on my side (and many more just arrived too late).
Your patch is applied to the WIP branch now, but currently testing of
-rc1 is going on which I do not want to disrupt.
If we need an -rc2 anyway, or generally have the time to do an -rc2
without too much disruption, it will be in 19.11.13; otherwise it is
already queued for 19.11.14.

> When trying to coalesce mbufs with chain on Tx side, it is possible
> to get stuck during queue wrap around. When coalescing this mbuf
> chain fails, the Tx path returns EBUSY and when the same packet
> is retried again, it couldn't get coalesced again, and the loop
> repeats. Fix by pushing the packet through the normal Tx path.
> Also use FW_ETH_TX_PKTS_WR to handle mbufs with chain for FW
> to optimize.
>
> Fixes: 6c2809628cd5 ("net/cxgbe: improve latency for slow traffic")
> Cc: stable@dpdk.org
>
> Signed-off-by: Rahul Lakkireddy
> ---
>  drivers/net/cxgbe/sge.c | 38 +++++++++++++++++++++++---------------
>  1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index 61ee218be..f3ff576cf 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -793,9 +793,9 @@ static inline void txq_advance(struct sge_txq *q, unsigned int n)
>
>  #define MAX_COALESCE_LEN 64000
>
> -static inline int wraps_around(struct sge_txq *q, int ndesc)
> +static inline bool wraps_around(struct sge_txq *q, int ndesc)
>  {
> -	return (q->pidx + ndesc) > q->size ? 1 : 0;
> +	return (q->pidx + ndesc) > q->size ? true : false;
>  }
>
>  static void tx_timer_cb(void *data)
> @@ -846,7 +846,6 @@ static inline void ship_tx_pkt_coalesce_wr(struct adapter *adap,
>
>  	/* fill the pkts WR header */
>  	wr = (void *)&q->desc[q->pidx];
> -	wr->op_pkd = htonl(V_FW_WR_OP(FW_ETH_TX_PKTS2_WR));
>  	vmwr = (void *)&q->desc[q->pidx];
>
>  	wr_mid = V_FW_WR_LEN16(DIV_ROUND_UP(q->coalesce.flits, 2));
> @@ -856,8 +855,11 @@
>  	wr->npkt = q->coalesce.idx;
>  	wr->r3 = 0;
>  	if (is_pf4(adap)) {
> -		wr->op_pkd = htonl(V_FW_WR_OP(FW_ETH_TX_PKTS2_WR));
>  		wr->type = q->coalesce.type;
> +		if (likely(wr->type != 0))
> +			wr->op_pkd = htonl(V_FW_WR_OP(FW_ETH_TX_PKTS2_WR));
> +		else
> +			wr->op_pkd = htonl(V_FW_WR_OP(FW_ETH_TX_PKTS_WR));
>  	} else {
>  		wr->op_pkd = htonl(V_FW_WR_OP(FW_ETH_TX_PKTS_VM_WR));
>  		vmwr->r4 = 0;
> @@ -936,13 +938,16 @@ static inline int should_tx_packet_coalesce(struct sge_eth_txq *txq,
>  	ndesc = DIV_ROUND_UP(q->coalesce.flits + flits, 8);
>  	credits = txq_avail(q) - ndesc;
>
> +	if (unlikely(wraps_around(q, ndesc)))
> +		return 0;
> +
>  	/* If we are wrapping or this is last mbuf then, send the
>  	 * already coalesced mbufs and let the non-coalesce pass
>  	 * handle the mbuf.
>  	 */
> -	if (unlikely(credits < 0 || wraps_around(q, ndesc))) {
> +	if (unlikely(credits < 0)) {
>  		ship_tx_pkt_coalesce_wr(adap, txq);
> -		return 0;
> +		return -EBUSY;
>  	}
>
>  	/* If the max coalesce len or the max WR len is reached
> @@ -966,8 +971,12 @@ static inline int should_tx_packet_coalesce(struct sge_eth_txq *txq,
>  	ndesc = flits_to_desc(q->coalesce.flits + flits);
>  	credits = txq_avail(q) - ndesc;
>
> -	if (unlikely(credits < 0 || wraps_around(q, ndesc)))
> +	if (unlikely(wraps_around(q, ndesc)))
>  		return 0;
> +
> +	if (unlikely(credits < 0))
> +		return -EBUSY;
> +
>  	q->coalesce.flits += wr_size / sizeof(__be64);
>  	q->coalesce.type = type;
>  	q->coalesce.ptr = (unsigned char *)&q->desc[q->pidx] +
> @@ -1110,7 +1119,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
>  	unsigned int flits, ndesc, cflits;
>  	int l3hdr_len, l4hdr_len, eth_xtra_len;
>  	int len, last_desc;
> -	int credits;
> +	int should_coal, credits;
>  	u32 wr_mid;
>  	u64 cntrl, *end;
>  	bool v6;
> @@ -1141,9 +1150,9 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
>  	/* align the end of coalesce WR to a 512 byte boundary */
>  	txq->q.coalesce.max = (8 - (txq->q.pidx & 7)) * 8;
>
> -	if (!((m->ol_flags & PKT_TX_TCP_SEG) ||
> -	      m->pkt_len > RTE_ETHER_MAX_LEN)) {
> -		if (should_tx_packet_coalesce(txq, mbuf, &cflits, adap)) {
> +	if ((m->ol_flags & PKT_TX_TCP_SEG) == 0) {
> +		should_coal = should_tx_packet_coalesce(txq, mbuf, &cflits, adap);
> +		if (should_coal > 0) {
>  			if (unlikely(map_mbuf(mbuf, addr) < 0)) {
>  				dev_warn(adap, "%s: mapping err for coalesce\n",
>  					 __func__);
> @@ -1152,8 +1161,8 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
>  			}
>  			return tx_do_packet_coalesce(txq, mbuf, cflits, adap,
>  						     pi, addr, nb_pkts);
> -		} else {
> -			return -EBUSY;
> +		} else if (should_coal < 0) {
> +			return should_coal;
>  		}
>  	}
>
> @@ -1200,8 +1209,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
>  		end = (u64 *)vmwr + flits;
>  	}
>
> -	len = 0;
> -	len += sizeof(*cpl);
> +	len = sizeof(*cpl);
>
>  	/* Coalescing skipped and we send through normal path */
>  	if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
> --
> 2.27.0
>

-- 
Christian Ehrhardt
Senior Staff Engineer, Ubuntu Server
Canonical Ltd