From: Christian Ehrhardt
Date: Tue, 17 Aug 2021 13:55:15 +0200
To: Bing Zhao
Cc: dpdk stable, Viacheslav Ovsiienko, Matan Azrad
In-Reply-To: <20210816162952.1931473-7-bingz@nvidia.com>
References: <20210816162952.1931473-1-bingz@nvidia.com> <20210816162952.1931473-7-bingz@nvidia.com>
Subject: Re: [dpdk-stable] [PATCH 19.11 6/6] net/mlx5: fix multi-segment inline for the first segments
List-Id: patches for DPDK stable branches
On Mon, Aug 16, 2021 at 6:30 PM Bing Zhao wrote:
>
> From: Viacheslav Ovsiienko
>
> [ upstream commit ec837ad0fc7c6df4912cc2706b9cd54b225f4a34 ]

Applying this causes build failures on some platforms. Looks like:

[  974s] ../drivers/net/mlx5/mlx5_rxtx.c: In function 'mlx5_tx_packet_multi_inline':
[  974s] ../drivers/net/mlx5/mlx5_rxtx.c:3356:31: error: 'PKT_TX_DYNF_NOINLINE' undeclared (first use in this function)
[  974s]  3356 |  } else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
[  974s]       |                              ^~~~~~~~~~~~~~~~~~~~
[  974s] ../drivers/net/mlx5/mlx5_rxtx.c:3356:31: note: each undeclared identifier is reported only once for each function it appears in
[  974s] ninja: build stopped: subcommand failed.

And indeed this is the only occurrence:

$ grep -Hrn PKT_TX_DYNF_NOINLINE *
drivers/net/mlx5/mlx5_rxtx.c:3356:  } else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||

Since it only happens on some releases I'd assume the other
arches/distros just do not build this driver? It seems to only affect
those building with meson.

For now I've removed this patch again from 19.11 - please have a look
and let me know if you'll provide a refreshed backport.
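In case it helps with the refresh: if I read the upstream history
right, PKT_TX_DYNF_NOINLINE is the dynamic mbuf flag for the
per-packet "do not inline" hint, and mlx5 only gained that after
19.11, so the identifier simply has no definition on this branch.
A minimal sketch of the new condition with the flag check dropped
(untested on my side, and assuming the backport is not meant to
bring the hint itself to 19.11):

  } else if (nxlen > txq->inlen_send) {
          /* Too long to inline, and 19.11 has no per-packet
           * no-inline hint to honor: send without inlining. */
          return mlx5_tx_packet_multi_send(txq, loc, olx);
  } else {
          goto do_first;
  }

That should keep the "inline short first segments" fix while avoiding
the undeclared identifier, but you know this datapath far better than
I do.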
> Before 19.08 release the Tx burst routines of mlx5 PMD
> provided data inline for the first short segments of the
> multi-segment packets. In the release 19.08 mlx5 Tx datapath
> was refactored and this behavior was broken, affecting the
> performance.
>
> For example, the T-Rex traffic generator might use small
> leading segments to handle packet headers and performance
> degradation was noticed.
>
> If the first segments of the multi-segment packet are short
> and the overall length is below the inline threshold it
> should be inlined into the WQE to fix the performance.
>
> Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
> Cc: stable@dpdk.org
>
> Signed-off-by: Viacheslav Ovsiienko
> Signed-off-by: Bing Zhao
> ---
>  drivers/net/mlx5/mlx5_rxtx.c | 27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> index 73dbf68d2b..094e359e55 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -3336,6 +3336,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *restrict txq,
>                 unsigned int nxlen;
>                 uintptr_t start;
>
> +               mbuf = loc->mbuf;
> +               nxlen = rte_pktmbuf_data_len(mbuf);
>                 /*
>                  * Packet length exceeds the allowed inline
>                  * data length, check whether the minimal
> @@ -3345,27 +3347,23 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *restrict txq,
>                         assert(txq->inlen_mode >= MLX5_ESEG_MIN_INLINE_SIZE);
>                         assert(txq->inlen_mode <= txq->inlen_send);
>                         inlen = txq->inlen_mode;
> -               } else {
> -                       if (!vlan || txq->vlan_en) {
> -                               /*
> -                                * VLAN insertion will be done inside by HW.
> -                                * It is not utmost effective - VLAN flag is
> -                                * checked twice, but we should proceed the
> -                                * inlining length correctly and take into
> -                                * account the VLAN header being inserted.
> -                                */
> -                               return mlx5_tx_packet_multi_send
> -                                                       (txq, loc, olx);
> -                       }
> +               } else if (vlan && !txq->vlan_en) {
> +                       /*
> +                        * VLAN insertion is requested and hardware does not
> +                        * support the offload, will do with software inline.
> +                        */
>                         inlen = MLX5_ESEG_MIN_INLINE_SIZE;
> +               } else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
> +                          nxlen > txq->inlen_send) {
> +                       return mlx5_tx_packet_multi_send(txq, loc, olx);
> +               } else {
> +                       goto do_first;
>                 }
>                 /*
>                  * Now we know the minimal amount of data is requested
>                  * to inline. Check whether we should inline the buffers
>                  * from the chain beginning to eliminate some mbufs.
>                  */
> -               mbuf = loc->mbuf;
> -               nxlen = rte_pktmbuf_data_len(mbuf);
>                 if (unlikely(nxlen <= txq->inlen_send)) {
>                         /* We can inline first mbuf at least. */
>                         if (nxlen < inlen) {
> @@ -3387,6 +3385,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *restrict txq,
>                         goto do_align;
>                 }
>         }
> +do_first:
>         do {
>                 inlen = nxlen;
>                 mbuf = NEXT(mbuf);
> --
> 2.21.0
>

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd