From: "Min Hu (Connor)" <humin29@huawei.com>
To: <dev@dpdk.org>
Cc: <ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 5/5] net/hns3: select Tx prepare based on Tx offload
Date: Wed, 28 Apr 2021 15:20:55 +0800 [thread overview]
Message-ID: <1619594455-56787-6-git-send-email-humin29@huawei.com> (raw)
In-Reply-To: <1619594455-56787-1-git-send-email-humin29@huawei.com>
From: Chengchang Tang <tangchengchang@huawei.com>
Tx prepare should be called only when necessary, to reduce the impact
on performance.

For some Tx offloads, users need to call rte_eth_tx_prepare() to invoke
the tx_prepare callback of the PMD. In this callback, the PMD adjusts
the packet according to the offloads requested by the user (e.g. for
some PMDs, pseudo-header checksums must be calculated when Tx checksum
offload is enabled).

However, users cannot be expected to know the characteristics of every
piece of hardware and every PMD, so they cannot decide on their own
when tx_prepare actually needs to be called. Therefore, we should
assume that the user calls rte_eth_tx_prepare() whenever any Tx offload
is enabled, so that the related features work properly. Whether packets
need to be adjusted should be determined by the PMD, which can make
that judgment in the dev_configure or queue_setup phase. When no
adjustment is needed, the tx_prepare pointer should be set to NULL to
avoid the performance loss of invoking rte_eth_tx_prepare().

In this patch, if tx_prepare is not required for the offloads used by
the user, the tx_prepare pointer is set to NULL.
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/hns3/hns3_rxtx.c | 36 +++++++++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3881a72..7ac3a48 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4203,17 +4203,45 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
}
+static bool
+hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ /* always perform tx_prepare when debug */
+ return true;
+#else
+#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
+ DEV_TX_OFFLOAD_IPV4_CKSUM | \
+ DEV_TX_OFFLOAD_TCP_CKSUM | \
+ DEV_TX_OFFLOAD_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_SCTP_CKSUM | \
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_TCP_TSO | \
+ DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+ DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+
+ uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
+ if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
+ return true;
+
+ return false;
+#endif
+}
+
static eth_tx_burst_t
hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
{
struct hns3_adapter *hns = dev->data->dev_private;
bool vec_allowed, sve_allowed, simple_allowed;
- bool vec_support;
+ bool vec_support, tx_prepare_needed;
vec_support = hns3_tx_check_vec_support(dev) == 0;
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = hns3_tx_check_simple_support(dev);
+ tx_prepare_needed = hns3_get_tx_prep_needed(dev);
*prep = NULL;
@@ -4224,7 +4252,8 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_SIMPLE && simple_allowed)
return hns3_xmit_pkts_simple;
if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_COMMON) {
- *prep = hns3_prep_pkts;
+ if (tx_prepare_needed)
+ *prep = hns3_prep_pkts;
return hns3_xmit_pkts;
}
@@ -4233,7 +4262,8 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
if (simple_allowed)
return hns3_xmit_pkts_simple;
- *prep = hns3_prep_pkts;
+ if (tx_prepare_needed)
+ *prep = hns3_prep_pkts;
return hns3_xmit_pkts;
}
--
2.7.4