From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Min Hu (Connor)" <humin29@huawei.com>
To: dev@dpdk.org
Date: Wed, 28 Apr 2021 15:20:55 +0800
Message-ID: <1619594455-56787-6-git-send-email-humin29@huawei.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1619594455-56787-1-git-send-email-humin29@huawei.com>
References: <1619594455-56787-1-git-send-email-humin29@huawei.com>
Subject: [dpdk-dev] [PATCH 5/5] net/hns3: select Tx prepare based on Tx offload

From: Chengchang Tang

Tx prepare should be called only when necessary, to reduce its impact
on performance.

For some Tx offloads, users need to call rte_eth_tx_prepare() to invoke
the tx_prepare callback of the PMD. In this callback, the PMD adjusts
the packets according to the offloads the user has enabled (e.g. for
some PMDs, pseudo-header checksums must be calculated when Tx checksum
offload is used).

However, users cannot be expected to know every hardware and PMD
characteristic, so they cannot decide when a call to tx_prepare is
actually required. We should therefore assume that users call
rte_eth_tx_prepare() whenever any Tx offload is in use, so that the
related features work properly, and leave it to the PMD to determine
whether the packets need to be adjusted. The PMD can make that judgment
in the dev_configure or queue_setup phase; when no adjustment is
needed, it should set the tx_prepare pointer to NULL to avoid the
performance loss caused by invoking rte_eth_tx_prepare().

With this patch, if tx_prepare is not required for the offloads enabled
by the user, the tx_prepare pointer is set to NULL.
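For reference, the application-side calling sequence this change
assumes looks roughly like the sketch below. This is illustrative
only: the helper name send_burst and its arguments are made up for
the example, and error handling is elided.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: transmit a burst with Tx offloads enabled.
     * rte_eth_tx_prepare() gives the PMD a chance to fix packets up
     * (e.g. compute pseudo-header checksums) before they reach HW.
     */
    static void
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb)
    {
            uint16_t nb_prep, nb_sent;

            /* With this patch, this call becomes a near no-op when
             * the PMD decided at configure time that no adjustment
             * is needed (tx_pkt_prepare left as NULL).
             */
            nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb);

            nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);

            /* A real application would retry or free the packets in
             * pkts[nb_sent..nb-1]; omitted here for brevity.
             */
            (void)nb_sent;
    }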
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3881a72..7ac3a48 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4203,17 +4203,45 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
+static bool
+hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	/* always perform tx_prepare when debug */
+	return true;
+#else
+#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
+		DEV_TX_OFFLOAD_IPV4_CKSUM | \
+		DEV_TX_OFFLOAD_TCP_CKSUM | \
+		DEV_TX_OFFLOAD_UDP_CKSUM | \
+		DEV_TX_OFFLOAD_SCTP_CKSUM | \
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		DEV_TX_OFFLOAD_TCP_TSO | \
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+
+	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
+	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
+		return true;
+
+	return false;
+#endif
+}
+
 static eth_tx_burst_t
 hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
 	bool vec_allowed, sve_allowed, simple_allowed;
-	bool vec_support;
+	bool vec_support, tx_prepare_needed;
 
 	vec_support = hns3_tx_check_vec_support(dev) == 0;
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = hns3_tx_check_simple_support(dev);
+	tx_prepare_needed = hns3_get_tx_prep_needed(dev);
 
 	*prep = NULL;
 
@@ -4224,7 +4252,8 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 	if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_SIMPLE && simple_allowed)
 		return hns3_xmit_pkts_simple;
 	if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_COMMON) {
-		*prep = hns3_prep_pkts;
+		if (tx_prepare_needed)
+			*prep = hns3_prep_pkts;
 		return hns3_xmit_pkts;
 	}
 
@@ -4233,7 +4262,8 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 	if (simple_allowed)
 		return hns3_xmit_pkts_simple;
 
-	*prep = hns3_prep_pkts;
+	if (tx_prepare_needed)
+		*prep = hns3_prep_pkts;
 	return hns3_xmit_pkts;
 }

-- 
2.7.4
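Note: the saving comes from the generic rte_eth_tx_prepare() wrapper
in lib/librte_ethdev/rte_ethdev.h, which short-circuits when a driver
leaves tx_pkt_prepare unset. Below is a paraphrased sketch of that
wrapper for a non-debug build (not the verbatim upstream source),
showing why a NULL tx_pkt_prepare removes the per-burst overhead:

    static inline uint16_t
    rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
                       struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
            struct rte_eth_dev *dev = &rte_eth_devices[port_id];

            /* No callback registered: every packet is treated as
             * ready to send, so the per-burst cost is a single
             * NULL check.
             */
            if (dev->tx_pkt_prepare == NULL)
                    return nb_pkts;

            return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
                                          tx_pkts, nb_pkts);
    }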