From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id CF6635911 for ; Fri, 24 Feb 2017 09:48:28 +0100 (CET)
Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 24 Feb 2017 00:48:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.35,200,1484035200"; d="scan'208";a="61759896"
Received: from unknown (HELO dpdk5.bj.intel.com) ([172.16.182.188]) by orsmga004.jf.intel.com with ESMTP; 24 Feb 2017 00:48:27 -0800
From: Zhiyong Yang
To: dev@dpdk.org
Cc: Helin Zhang, Konstantin Ananyev, Zhiyong Yang
Date: Fri, 24 Feb 2017 16:48:19 +0800
Message-Id: <1487926101-4637-4-git-send-email-zhiyong.yang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1487926101-4637-1-git-send-email-zhiyong.yang@intel.com>
References: <1487926101-4637-1-git-send-email-zhiyong.yang@intel.com>
Subject: [dpdk-dev] [PATCH 3/5] net/ixgbe: remove limit of ixgbe_xmit_pkts_vec burst size
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 24 Feb 2017 08:48:29 -0000

Add a wrapper function ixgbe_xmit_pkts_vec_simple to remove the limit
on the tx burst size and implement a "make a best effort to transmit
the packets" policy. This makes the ixgbe vector function behave
consistently with ixgbe_xmit_pkts_simple and ixgbe_xmit_pkts.
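For illustration, the chunking policy above can be sketched as a standalone
program (not the driver code itself: TX_RS_THRESH, fake_xmit_vec and
xmit_simple are hypothetical stand-ins for the queue threshold, the vector
transmit path, and the wrapper, with the descriptor ring modeled as a simple
capacity counter):

```c
#include <stdint.h>

/* Assumed stand-in for txq->tx_rs_thresh: the most the vector path
 * can accept in one call. */
#define TX_RS_THRESH 32

/* Models ixgbe_xmit_pkts_vec: accepts at most TX_RS_THRESH packets
 * per call and may transmit fewer when the ring (capacity) runs low. */
static uint16_t fake_xmit_vec(uint16_t *capacity, uint16_t nb_pkts)
{
	uint16_t n = nb_pkts < TX_RS_THRESH ? nb_pkts : TX_RS_THRESH;

	if (n > *capacity)
		n = *capacity;
	*capacity -= n;
	return n;
}

/* The wrapper pattern from the patch: fast path for small bursts,
 * otherwise split into chunks of at most the threshold and stop
 * early on a short write (best effort). */
static uint16_t xmit_simple(uint16_t *capacity, uint16_t nb_pkts)
{
	uint16_t nb_tx = 0;

	if (nb_pkts <= TX_RS_THRESH)
		return fake_xmit_vec(capacity, nb_pkts);

	while (nb_pkts) {
		uint16_t num = nb_pkts < TX_RS_THRESH ? nb_pkts : TX_RS_THRESH;
		uint16_t ret = fake_xmit_vec(capacity, num);

		nb_tx += ret;
		nb_pkts -= ret;
		if (ret < num)
			break;
	}
	return nb_tx;
}
```

With a large ring the wrapper transmits the whole burst in threshold-sized
chunks; when the ring fills mid-burst it returns the count sent so far, which
is the consistent behavior the commit message refers to.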
Cc: Helin Zhang
Cc: Konstantin Ananyev
Signed-off-by: Zhiyong Yang
---
 drivers/net/ixgbe/ixgbe_rxtx.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 9502432..8b80903 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -363,6 +363,31 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static uint16_t
+ixgbe_xmit_pkts_vec_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+
+	if (likely(nb_pkts <= txq->tx_rs_thresh))
+		return ixgbe_xmit_pkts_vec(tx_queue, tx_pkts, nb_pkts);
+
+	/* transmit in chunks of at least txq->tx_rs_thresh */
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh);
+		ret = ixgbe_xmit_pkts_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 static inline void
 ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
@@ -2355,7 +2380,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
 			(rte_eal_process_type() != RTE_PROC_PRIMARY ||
 				ixgbe_txq_vec_setup(txq) == 0)) {
 		PMD_INIT_LOG(DEBUG, "Vector tx enabled.");
-		dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
+		dev->tx_pkt_burst = ixgbe_xmit_pkts_vec_simple;
 	} else
 #endif
 		dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
-- 
2.7.4