From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 25D16A0548;
	Fri, 23 Apr 2021 11:46:43 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 8A03D416C9;
	Fri, 23 Apr 2021 11:46:39 +0200 (CEST)
Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190])
	by mails.dpdk.org (Postfix) with ESMTP id 88B09410DD
	for ; Fri, 23 Apr 2021 11:46:36 +0200 (CEST)
Received: from DGGEMS412-HUB.china.huawei.com (unknown [172.30.72.58])
	by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4FRTrp5NMZz1BHCv;
	Fri, 23 Apr 2021 17:44:10 +0800 (CST)
Received: from localhost.localdomain (10.69.192.56) by
	DGGEMS412-HUB.china.huawei.com (10.3.19.212) with Microsoft SMTP Server id
	14.3.498.0; Fri, 23 Apr 2021 17:46:27 +0800
From: Chengchang Tang
To: 
CC: , , , ,
Date: Fri, 23 Apr 2021 17:46:41 +0800
Message-ID: <1619171202-28486-2-git-send-email-tangchengchang@huawei.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1619171202-28486-1-git-send-email-tangchengchang@huawei.com>
References: <1618571071-5927-1-git-send-email-tangchengchang@huawei.com>
	<1619171202-28486-1-git-send-email-tangchengchang@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [10.69.192.56]
X-CFilter-Loop: Reflected
Subject: [dpdk-dev] [PATCH 1/2] net/bonding: support Tx prepare for bonding
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

To use the HW offload capabilities (e.g. checksum and TSO) in the Tx
direction, upper-layer users need to call rte_eth_tx_prepare() to make
some adjustments to the packets before sending them (e.g. processing
pseudo headers when Tx checksum offload is enabled).
However, the tx_prepare callback of the bonding driver is not
implemented. Therefore, the related offloads cannot be used unless the
upper-layer application processes the packets properly on its own, which
is bad for portability.

It is difficult to design a tx_prepare callback for the bonding driver,
because when a bonded device sends packets, it distributes them to
different slave devices based on the real-time link status and the
bonding mode. That is, it is very hard for the bonding device to
determine which slave device's prepare function should be invoked. In
addition, if the link status changes after the packets are prepared, the
packets may fail to be sent because the packet distribution may change.

So, in this patch, the tx_prepare callback of the bonding driver is not
implemented. Instead, rte_eth_tx_prepare() is called for all fast-path
packets in modes 0, 1, 2, 4, 5 and 6. In this way, all Tx offloads can
be processed correctly for all NIC devices in these modes. If tx_prepare
is not required in some cases, the slave PMD's tx_prepare pointer should
be NULL, so that rte_eth_tx_prepare() becomes just a NOOP; in these
cases the impact on performance is very limited. It is the
responsibility of the slave PMDs to decide when the real tx_prepare
needs to be used; the information available at dev_configure/queue_setup
time is sufficient for them to make that decision.

Note: rte_eth_tx_prepare() is not added to bond mode 3 (Broadcast). In
broadcast mode, a packet needs to be sent by all slave ports, and
different PMDs process packets differently in tx_prepare, so the sent
packet could end up incorrect.
Signed-off-by: Chengchang Tang
---
 drivers/net/bonding/rte_eth_bond.h     |  1 -
 drivers/net/bonding/rte_eth_bond_pmd.c | 28 ++++++++++++++++++++++++----
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91..1e6cc6d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -343,7 +343,6 @@ rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id,
 int
 rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id);
 
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e9cea5..84af348 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -606,8 +606,14 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
+			int nb_prep_pkts;
+
+			nb_prep_pkts = rte_eth_tx_prepare(slaves[i],
+					bd_tx_q->queue_id, slave_bufs[i],
+					slave_nb_pkts[i]);
+
 			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], slave_nb_pkts[i]);
+					slave_bufs[i], nb_prep_pkts);
 
 			/* if tx burst fails move packets to end of bufs */
 			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
@@ -632,6 +638,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 {
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
+	int nb_prep_pkts;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
@@ -639,8 +646,11 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 	if (internals->active_slave_count < 1)
 		return 0;
 
+	nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
+			bd_tx_q->queue_id, bufs, nb_pkts);
+
 	return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
-			bufs, nb_pkts);
+			bufs, nb_prep_pkts);
 }
 
 static inline uint16_t
@@ -939,6 +949,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 
 	for (i = 0; i < num_of_slaves; i++) {
+		int nb_prep_pkts;
+
 		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
@@ -955,9 +967,12 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		nb_prep_pkts = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
 
+		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+				bufs + num_tx_total, nb_prep_pkts);
+
 		if (num_tx_total == nb_pkts)
 			break;
 	}
@@ -1159,12 +1174,17 @@ tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 
 	/* Send packet burst on each slave device */
 	for (i = 0; i < slave_count; i++) {
+		int nb_prep_pkts;
+
 		if (slave_nb_bufs[i] == 0)
 			continue;
 
+		nb_prep_pkts = rte_eth_tx_prepare(slave_port_ids[i],
+				bd_tx_q->queue_id, slave_bufs[i],
+				slave_nb_bufs[i]);
 		slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
 				bd_tx_q->queue_id, slave_bufs[i],
-				slave_nb_bufs[i]);
+				nb_prep_pkts);
 
 		total_tx_count += slave_tx_count;
-- 
2.7.4