From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mingjin Ye
To: dev@dpdk.org
Cc: qiming.yang@intel.com, stable@dpdk.org, yidingx.zhou@intel.com,
	Mingjin Ye, Qi Zhang, Wenzhuo Lu, Jingjing Wu, Xiaoyun Li,
	Ferruh Yigit
Subject: [PATCH v2] net/ice: fix scalar Rx and Tx path segment
Date: Wed, 9 Nov 2022 12:56:09 +0000
Message-Id: <20221109125609.724612-1-mingjinx.ye@intel.com>
In-Reply-To: <20221103172040.388518-1-mingjinx.ye@intel.com>
References: <20221103172040.388518-1-mingjinx.ye@intel.com>
List-Id: DPDK patches and discussions

The CRC is stripped by the hardware in the scattered Rx path. If the
packet length of the last buffer is 0, the scalar Tx path would send an
empty buffer, causing the Tx queue to overflow.

This patch fixes the issue by adding a check on the length of the last
buffer: if the last buffer is empty, the mbuf associated with it is
freed.

Fixes: 6eac0b7fde95 ("net/ice: support advance Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye

v2:
* Fix log level in ice_rxtx.c source file.
---
 drivers/net/ice/ice_rxtx.c | 53 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a2b0376ac..b181f66aad 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2111,6 +2111,10 @@ ice_recv_scattered_pkts(void *rx_queue,
 			} else
 				rxm->data_len = (uint16_t)(rx_packet_len -
 							   RTE_ETHER_CRC_LEN);
+		} else if (rx_packet_len == 0) {
+			rte_pktmbuf_free_seg(rxm);
+			first_seg->nb_segs--;
+			last_seg->next = NULL;
 		}
 
 		first_seg->port = rxq->port_id;
@@ -2903,6 +2907,35 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
 	return count;
 }
 
+/* Check the number of valid mbufs and free the invalid mbufs */
+static inline uint16_t
+ice_check_mbuf(struct rte_mbuf *tx_pkt)
+{
+	struct rte_mbuf *txd = tx_pkt;
+	struct rte_mbuf *txd_removal = NULL;
+	struct rte_mbuf *txd_pre = NULL;
+	uint16_t count = 0;
+	uint16_t removal = 0;
+
+	while (txd != NULL) {
+		if (removal == 1 || txd->data_len == 0) {
+			txd_removal = txd;
+			txd = txd->next;
+			if (removal == 0) {
+				removal = 1;
+				txd_pre->next = NULL;
+			}
+			rte_pktmbuf_free_seg(txd_removal);
+		} else {
+			++count;
+			txd_pre = txd;
+			txd = txd->next;
+		}
+	}
+
+	return count;
+}
+
 uint16_t
 ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -2960,11 +2993,27 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
-		else
+		} else {
+			nb_used = ice_check_mbuf(tx_pkt);
+			if (nb_used == 0) {
+				PMD_TX_LOG(ERR,
+					   "Check packets is empty "
+					   "(port=%d queue=%d)\n",
+					   txq->port_id, txq->queue_id);
+				continue;
+			} else if (nb_used < tx_pkt->nb_segs) {
+				PMD_TX_LOG(DEBUG,
+					   "Check packets valid num ="
+					   "%4u total num = %4u (port=%d queue=%d)\n",
+					   nb_used, tx_pkt->nb_segs,
+					   txq->port_id, txq->queue_id);
+				tx_pkt->nb_segs = nb_used;
+			}
 			nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		}
+
 		tx_last = (uint16_t)(tx_id + nb_used - 1);
 
 		/* Circular ring */
-- 
2.34.1