From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: <dev@dpdk.org>
Cc: <rasland@nvidia.com>, <matan@nvidia.com>, <alialnu@nvidia.com>,
<stable@dpdk.org>
Subject: [dpdk-dev] [PATCH] net/mlx5: fix multi-segment inline for the first segment
Date: Tue, 22 Jun 2021 19:40:49 +0300
Message-ID: <20210622164049.9191-1-viacheslavo@nvidia.com>

If the first segment of a multi-segment packet is short, i.e. below
the inline threshold, it should be inlined into the WQE to improve
performance. For example, the T-Rex traffic generator may use small
leading segments that carry only the packet headers, and its
performance was affected by the missing inlining.
Fixes: cacb44a09962 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_tx.h | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index e8b1c0f108..1a35919371 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -2041,6 +2041,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
unsigned int nxlen;
uintptr_t start;
+ mbuf = loc->mbuf;
+ nxlen = rte_pktmbuf_data_len(mbuf);
/*
* Packet length exceeds the allowed inline data length,
* check whether the minimal inlining is required.
@@ -2050,28 +2052,23 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
MLX5_ESEG_MIN_INLINE_SIZE);
MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
inlen = txq->inlen_mode;
- } else {
- if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
- !vlan || txq->vlan_en) {
- /*
- * VLAN insertion will be done inside by HW.
- * It is not utmost effective - VLAN flag is
- * checked twice, but we should proceed the
- * inlining length correctly and take into
- * account the VLAN header being inserted.
- */
- return mlx5_tx_packet_multi_send
- (txq, loc, olx);
- }
+ } else if (vlan && !txq->vlan_en) {
+ /*
+ * VLAN insertion is requested and hardware does not
+ * support the offload, will do with software inline.
+ */
inlen = MLX5_ESEG_MIN_INLINE_SIZE;
+ } else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+ nxlen > txq->inlen_send) {
+ return mlx5_tx_packet_multi_send(txq, loc, olx);
+ } else {
+ goto do_first;
}
/*
* Now we know the minimal amount of data is requested
* to inline. Check whether we should inline the buffers
* from the chain beginning to eliminate some mbufs.
*/
- mbuf = loc->mbuf;
- nxlen = rte_pktmbuf_data_len(mbuf);
if (unlikely(nxlen <= txq->inlen_send)) {
/* We can inline first mbuf at least. */
if (nxlen < inlen) {
@@ -2093,6 +2090,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
goto do_align;
}
}
+do_first:
do {
inlen = nxlen;
mbuf = NEXT(mbuf);
--
2.18.1
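
For context, here is a minimal sketch of the decision chain the patch
establishes in mlx5_tx_packet_multi_inline(): the first segment length
is now read before the branch (the patch hoists mbuf/nxlen above the
if/else chain), so a short first segment can be inlined even when the
packet as a whole exceeds the inline limit. The names below
(classify, first_seg_len, inlen_send, the enum) are hypothetical
illustrations of the control flow, not the actual mlx5 datapath types.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical distillation of the reordered branch; the real code
 * operates on struct mlx5_txq_data and struct rte_mbuf in mlx5_tx.h. */
enum tx_action {
	TX_INLINE_MIN,   /* inline only the minimal mode-required part  */
	TX_INLINE_VLAN,  /* software VLAN insertion forces some inlining */
	TX_SEND_PTR,     /* no inlining, send by pointer (multi_send)    */
	TX_INLINE_FIRST, /* new path: inline the short first segment     */
};

static enum tx_action
classify(uint16_t first_seg_len, uint16_t inlen_send, uint16_t inlen_mode,
	 bool vlan, bool vlan_hw, bool no_inline_hint)
{
	if (inlen_mode)		/* minimal inlining mandated by the port */
		return TX_INLINE_MIN;
	if (vlan && !vlan_hw)	/* HW cannot insert the VLAN tag itself */
		return TX_INLINE_VLAN;
	/* Application forbids inlining, or the first segment is too
	 * long to fit: fall back to sending by pointer. */
	if (no_inline_hint || first_seg_len > inlen_send)
		return TX_SEND_PTR;
	/* Short leading segment (e.g. a header-only mbuf from T-Rex):
	 * inline it into the WQE, the behavior this patch restores. */
	return TX_INLINE_FIRST;
}

The key point is that before this fix the short-first-segment case
fell through to the pointer path; with the reordered checks it reaches
the do_first label and is copied into the WQE.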