From mboxrd@z Thu Jan 1 00:00:00 1970
From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: yskoh@mellanox.com, matan@mellanox.com
Date: Mon, 5 Aug 2019 13:03:52 +0000
Message-Id: <1565010234-21769-5-git-send-email-viacheslavo@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1565010234-21769-1-git-send-email-viacheslavo@mellanox.com>
References: <1565010234-21769-1-git-send-email-viacheslavo@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 4/6] net/mlx5: fix inline data settings
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

If minimal inline data is required, the data inline feature must be engaged.
The settings incorrectly enabled inlining of every small packet (up to 82B in size), which may cause the send rate to decline when there are not enough CPU cores. The same problem occurred when inlining was enabled to support VLAN tag insertion in software.

Fixes: 38b4b397a57d ("net/mlx5: add Tx configuration and setup")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_txq.c | 39 ++++++++++++++++++---------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index fe3b4ec..81f3b40 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -784,13 +784,11 @@ struct mlx5_txq_ibv *
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
 	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
-	if (vlan_inline)
-		inlen_send = RTE_MAX(inlen_send, MLX5_ESEG_MIN_INLINE_SIZE);
 	/*
 	 * If there are few Tx queues it is prioritized
 	 * to save CPU cycles and disable data inlining at all.
 	 */
-	if ((inlen_send && priv->txqs_n >= txqs_inline) || vlan_inline) {
+	if (inlen_send && priv->txqs_n >= txqs_inline) {
 		/*
 		 * The data sent with ordinal MLX5_OPCODE_SEND
 		 * may be inlined in Ethernet Segment, align the
@@ -825,32 +823,31 @@ struct mlx5_txq_ibv *
 			     MLX5_WQE_CSEG_SIZE -
 			     MLX5_WQE_ESEG_SIZE -
 			     MLX5_WQE_DSEG_SIZE * 2);
-		txq_ctrl->txq.inlen_send = inlen_send;
-		txq_ctrl->txq.inlen_mode = inlen_mode;
-		txq_ctrl->txq.inlen_empw = 0;
-	} else {
+	} else if (inlen_mode) {
 		/*
 		 * If minimal inlining is requested we must
 		 * enable inlining in general, despite the
-		 * number of configured queues.
+		 * number of configured queues. Ignore the
+		 * txq_inline_max devarg, this is not
+		 * full-featured inline.
 		 */
 		inlen_send = inlen_mode;
-		if (inlen_mode) {
-			/*
-			 * Extend space for inline data to allow
-			 * optional alignment of data buffer
-			 * start address, it may improve PCIe
-			 * performance.
-			 */
-			inlen_send = RTE_MIN(inlen_send + MLX5_WQE_SIZE,
-					     MLX5_SEND_MAX_INLINE_LEN);
-		}
-		txq_ctrl->txq.inlen_send = inlen_send;
-		txq_ctrl->txq.inlen_mode = inlen_mode;
-		txq_ctrl->txq.inlen_empw = 0;
+		inlen_empw = 0;
+	} else if (vlan_inline) {
+		/*
+		 * Hardware does not report offload for
+		 * VLAN insertion, we must enable data inline
+		 * to implement feature by software.
+		 */
+		inlen_send = MLX5_ESEG_MIN_INLINE_SIZE;
+		inlen_empw = 0;
+	} else {
+		inlen_send = 0;
+		inlen_empw = 0;
 	}
+	txq_ctrl->txq.inlen_send = inlen_send;
+	txq_ctrl->txq.inlen_mode = inlen_mode;
+	txq_ctrl->txq.inlen_empw = 0;
 	if (inlen_send && inlen_empw && priv->txqs_n >= txqs_inline) {
 		/*
 		 * The data sent with MLX5_OPCODE_ENHANCED_MPSW
-- 
1.8.3.1