From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: yskoh@mellanox.com, shahafs@mellanox.com
Subject: [dpdk-dev] [PATCH] net/mlx5: fix inline data settings
Date: Sun, 4 Aug 2019 13:56:35 +0000 [thread overview]
Message-ID: <1564926995-31618-1-git-send-email-viacheslavo@mellanox.com> (raw)
If minimal inline data are required, the data inline feature
must be engaged. The previous settings incorrectly enabled
inlining of entire small packets (up to 82B in size), which may
cause the sending rate to decline if there are not enough cores.
The same problem occurred when inlining was enabled to support
VLAN tag insertion in software.
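The corrected precedence can be sketched as follows. This is a hypothetical,
standalone C function, not the actual mlx5 code: pick_inlen_send and the
constant values are illustrative stand-ins, and the real driver additionally
aligns and clamps the chosen length against WQE size limits.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the mlx5 constants; values are hypothetical. */
#define ESEG_MIN_INLINE_SIZE 18u

/*
 * Decide the SEND inline length with the corrected precedence:
 * 1. Full-featured inline only when enough Tx queues are configured.
 * 2. Otherwise, a requested minimal inline (inlen_mode) alone engages
 *    inlining, ignoring the txq_inline_max devarg.
 * 3. Otherwise, software VLAN insertion needs only the minimal
 *    Ethernet-segment inline, not whole small packets.
 */
static unsigned int
pick_inlen_send(unsigned int inlen_send, unsigned int inlen_mode,
		bool vlan_inline, unsigned int txqs_n,
		unsigned int txqs_inline)
{
	if (inlen_send && txqs_n >= txqs_inline)
		return inlen_send; /* full-featured data inline */
	if (inlen_mode)
		return inlen_mode; /* minimal inline is mandatory */
	if (vlan_inline)
		return ESEG_MIN_INLINE_SIZE; /* VLAN insertion by software */
	return 0; /* few queues, no constraints: save CPU cycles */
}
```

With few queues and no constraints the function returns 0 (inlining
disabled), which is the behavior the fix restores.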
Fixes: 38b4b397a57d ("net/mlx5: add Tx configuration and setup")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_txq.c | 37 +++++++++++++++----------------------
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index fe3b4ec..a0e27bb 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -784,13 +784,11 @@ struct mlx5_txq_ibv *
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
- if (vlan_inline)
- inlen_send = RTE_MAX(inlen_send, MLX5_ESEG_MIN_INLINE_SIZE);
/*
* If there are few Tx queues it is prioritized
* to save CPU cycles and disable data inlining at all.
*/
- if ((inlen_send && priv->txqs_n >= txqs_inline) || vlan_inline) {
+ if (inlen_send && priv->txqs_n >= txqs_inline) {
/*
* The data sent with ordinal MLX5_OPCODE_SEND
* may be inlined in Ethernet Segment, align the
@@ -825,32 +823,27 @@ struct mlx5_txq_ibv *
MLX5_WQE_CSEG_SIZE -
MLX5_WQE_ESEG_SIZE -
MLX5_WQE_DSEG_SIZE * 2);
- txq_ctrl->txq.inlen_send = inlen_send;
- txq_ctrl->txq.inlen_mode = inlen_mode;
- txq_ctrl->txq.inlen_empw = 0;
- } else {
+ } else if (inlen_mode) {
/*
* If minimal inlining is requested we must
* enable inlining in general, despite the
- * number of configured queues.
+ * number of configured queues. Ignore the
+ * txq_inline_max devarg, this is not
+ * full-featured inline.
*/
inlen_send = inlen_mode;
- if (inlen_mode) {
- /*
- * Extend space for inline data to allow
- * optional alignment of data buffer
- * start address, it may improve PCIe
- * performance.
- */
- inlen_send = RTE_MIN(inlen_send + MLX5_WQE_SIZE,
- MLX5_SEND_MAX_INLINE_LEN);
- }
- txq_ctrl->txq.inlen_send = inlen_send;
- txq_ctrl->txq.inlen_mode = inlen_mode;
- txq_ctrl->txq.inlen_empw = 0;
- inlen_send = 0;
inlen_empw = 0;
+ } else if (vlan_inline) {
+ /*
+ * Hardware does not report offload for
+ * VLAN insertion, we must enable data inline
+ * to implement feature by software.
+ */
+ inlen_send = MLX5_ESEG_MIN_INLINE_SIZE;
}
+ txq_ctrl->txq.inlen_send = inlen_send;
+ txq_ctrl->txq.inlen_mode = inlen_mode;
+ txq_ctrl->txq.inlen_empw = 0;
if (inlen_send && inlen_empw && priv->txqs_n >= txqs_inline) {
/*
* The data sent with MLX5_OPCODE_ENHANCED_MPSW
--
1.8.3.1
Thread overview: 2+ messages
2019-08-04 13:56 Viacheslav Ovsiienko [this message]
2019-08-05 6:41 ` Matan Azrad