From: Andrew Rybchenko
To: Declan Doherty, Chas Williams
Cc: dev@dpdk.org, Ivan Malov
Date: Wed, 5 Sep 2018 10:13:37 +0100
Message-ID: <1536138818-12342-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 1/2] net/bonding: provide default Rx/Tx configuration

From: Ivan Malov

The default Rx/Tx configuration has become a helpful resource for
applications that rely on optimal values when filling in the
rte_eth_rxconf and rte_eth_txconf structures, which they can then
tweak as needed. The default configuration is also used internally
by the rte_eth_rx_queue_setup and rte_eth_tx_queue_setup API calls
when the caller passes a NULL pointer for the custom queue
configuration argument. Use cases of the bonding driver may also
benefit from exercising the default settings in the same way.

Restructure the code to collect the relevant settings from the slave
ports, combine their default Rx/Tx configuration and report the
result to callers of rte_eth_dev_info_get.
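As an illustration (not part of the patch), below is a minimal sketch of
how an application might consume the defaults that the bonded device now
reports. The port id, descriptor counts, mempool and the rx_drop_en tweak
are placeholders chosen for this example; passing NULL instead of the
rxconf/txconf pointers lets the setup calls apply the same defaults
internally.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: queue setup on an already configured bonded port */
static int
setup_queues_from_defaults(uint16_t bond_port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_eth_txconf txconf;
	int ret;

	/* The bonding PMD now fills default_rxconf/default_txconf here */
	rte_eth_dev_info_get(bond_port_id, &dev_info);

	/* Start from the reported defaults and tweak only what is needed */
	rxconf = dev_info.default_rxconf;
	rxconf.rx_drop_en = 1;		/* example tweak */

	txconf = dev_info.default_txconf;

	ret = rte_eth_rx_queue_setup(bond_port_id, 0, 512,
				     rte_eth_dev_socket_id(bond_port_id),
				     &rxconf, mb_pool);
	if (ret != 0)
		return ret;

	/* Passing NULL here would apply default_txconf internally instead */
	return rte_eth_tx_queue_setup(bond_port_id, 0, 512,
				      rte_eth_dev_socket_id(bond_port_id),
				      &txconf);
}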
Signed-off-by: Ivan Malov
Signed-off-by: Andrew Rybchenko
---
 drivers/net/bonding/rte_eth_bond_api.c     | 161 +++++++++++++++++----
 drivers/net/bonding/rte_eth_bond_pmd.c     |  10 ++
 drivers/net/bonding/rte_eth_bond_private.h |   3 +
 3 files changed, 147 insertions(+), 27 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 8bc04cfd1..206a5c797 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -269,6 +269,136 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
 	return 0;
 }
 
+static void
+eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+					 const struct rte_eth_dev_info *di)
+{
+	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
+
+	internals->reta_size = di->reta_size;
+
+	/* Inherit Rx offload capabilities from the first slave device */
+	internals->rx_offload_capa = di->rx_offload_capa;
+	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
+	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
+
+	/* Inherit maximum Rx packet size from the first slave device */
+	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
+
+	/* Inherit default Rx queue settings from the first slave device */
+	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
+
+	/*
+	 * Turn off descriptor prefetch and writeback by default for all
+	 * slave devices. Applications may tweak this setting if need be.
+	 */
+	rxconf_i->rx_thresh.pthresh = 0;
+	rxconf_i->rx_thresh.hthresh = 0;
+	rxconf_i->rx_thresh.wthresh = 0;
+
+	/* Setting this to zero should effectively enable default values */
+	rxconf_i->rx_free_thresh = 0;
+
+	/* Disable deferred start by default for all slave devices */
+	rxconf_i->rx_deferred_start = 0;
+}
+
+static void
+eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+					 const struct rte_eth_dev_info *di)
+{
+	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
+
+	/* Inherit Tx offload capabilities from the first slave device */
+	internals->tx_offload_capa = di->tx_offload_capa;
+	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
+
+	/* Inherit default Tx queue settings from the first slave device */
+	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
+
+	/*
+	 * Turn off descriptor prefetch and writeback by default for all
+	 * slave devices. Applications may tweak this setting if need be.
+	 */
+	txconf_i->tx_thresh.pthresh = 0;
+	txconf_i->tx_thresh.hthresh = 0;
+	txconf_i->tx_thresh.wthresh = 0;
+
+	/*
+	 * Setting these parameters to zero assumes that default
+	 * values will be configured implicitly by slave devices.
+	 */
+	txconf_i->tx_free_thresh = 0;
+	txconf_i->tx_rs_thresh = 0;
+
+	/* Disable deferred start by default for all slave devices */
+	txconf_i->tx_deferred_start = 0;
+}
+
+static void
+eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+					const struct rte_eth_dev_info *di)
+{
+	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
+	const struct rte_eth_rxconf *rxconf = &di->default_rxconf;
+
+	internals->rx_offload_capa &= di->rx_offload_capa;
+	internals->rx_queue_offload_capa &= di->rx_queue_offload_capa;
+	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
+
+	/*
+	 * If at least one slave device suggests enabling this
+	 * setting by default, enable it for all slave devices
+	 * since disabling it may not be necessarily supported.
+	 */
+	if (rxconf->rx_drop_en == 1)
+		rxconf_i->rx_drop_en = 1;
+
+	/*
+	 * Adding a new slave device may cause some of previously inherited
+	 * offloads to be withdrawn from the internal rx_queue_offload_capa
+	 * value. Thus, the new internal value of default Rx queue offloads
+	 * has to be masked by rx_queue_offload_capa to make sure that only
+	 * commonly supported offloads are preserved from both the previous
+	 * value and the value being inherited from the new slave device.
+	 */
+	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
+			     internals->rx_queue_offload_capa;
+
+	/*
+	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+	 * the power of 2, the lower one is GCD
+	 */
+	if (internals->reta_size > di->reta_size)
+		internals->reta_size = di->reta_size;
+
+	if (!internals->max_rx_pktlen &&
+	    di->max_rx_pktlen < internals->candidate_max_rx_pktlen)
+		internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
+}
+
+static void
+eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+					const struct rte_eth_dev_info *di)
+{
+	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
+	const struct rte_eth_txconf *txconf = &di->default_txconf;
+
+	internals->tx_offload_capa &= di->tx_offload_capa;
+	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
+
+	/*
+	 * Adding a new slave device may cause some of previously inherited
+	 * offloads to be withdrawn from the internal tx_queue_offload_capa
+	 * value. Thus, the new internal value of default Tx queue offloads
+	 * has to be masked by tx_queue_offload_capa to make sure that only
+	 * commonly supported offloads are preserved from both the previous
+	 * value and the value being inherited from the new slave device.
+	 */
+	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
+			     internals->tx_queue_offload_capa;
+}
+
 static int
 __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 {
@@ -326,34 +456,11 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
 		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
 
-		internals->reta_size = dev_info.reta_size;
-
-		/* Take the first dev's offload capabilities */
-		internals->rx_offload_capa = dev_info.rx_offload_capa;
-		internals->tx_offload_capa = dev_info.tx_offload_capa;
-		internals->rx_queue_offload_capa = dev_info.rx_queue_offload_capa;
-		internals->tx_queue_offload_capa = dev_info.tx_queue_offload_capa;
-		internals->flow_type_rss_offloads = dev_info.flow_type_rss_offloads;
-
-		/* Inherit first slave's max rx packet size */
-		internals->candidate_max_rx_pktlen = dev_info.max_rx_pktlen;
-
+		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
+		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
 	} else {
-		internals->rx_offload_capa &= dev_info.rx_offload_capa;
-		internals->tx_offload_capa &= dev_info.tx_offload_capa;
-		internals->rx_queue_offload_capa &= dev_info.rx_queue_offload_capa;
-		internals->tx_queue_offload_capa &= dev_info.tx_queue_offload_capa;
-		internals->flow_type_rss_offloads &= dev_info.flow_type_rss_offloads;
-
-		/* RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
-		 * the power of 2, the lower one is GCD
-		 */
-		if (internals->reta_size > dev_info.reta_size)
-			internals->reta_size = dev_info.reta_size;
-
-		if (!internals->max_rx_pktlen &&
-		    dev_info.max_rx_pktlen < internals->candidate_max_rx_pktlen)
-			internals->candidate_max_rx_pktlen = dev_info.max_rx_pktlen;
+		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
+		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
 	}
 
 	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index b84f32263..ee24e9658 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -2234,6 +2234,11 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = max_nb_rx_queues;
 	dev_info->max_tx_queues = max_nb_tx_queues;
 
+	memcpy(&dev_info->default_rxconf, &internals->default_rxconf,
+	       sizeof(dev_info->default_rxconf));
+	memcpy(&dev_info->default_txconf, &internals->default_txconf,
+	       sizeof(dev_info->default_txconf));
+
 	/**
 	 * If dedicated hw queues enabled for link bonding device in LACP mode
 	 * then we need to reduce the maximum number of data path queues by 1.
@@ -3054,6 +3059,11 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	/* Initially allow to choose any offload type */
 	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
 
+	memset(&internals->default_rxconf, 0,
+	       sizeof(internals->default_rxconf));
+	memset(&internals->default_txconf, 0,
+	       sizeof(internals->default_txconf));
+
 	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
 	memset(internals->slaves, 0, sizeof(internals->slaves));
 
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 43e0e448d..d12a0ebbe 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -160,6 +160,9 @@ struct bond_dev_private {
 	/** Bit mask of RSS offloads, the bit offset also means flow type */
 	uint64_t flow_type_rss_offloads;
 
+	struct rte_eth_rxconf default_rxconf;	/**< Default RxQ conf. */
+	struct rte_eth_txconf default_txconf;	/**< Default TxQ conf. */
+
 	uint16_t reta_size;
 	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
 			RTE_RETA_GROUP_SIZE];
-- 
2.17.1
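For reviewers, a standalone sketch (not part of the patch) of the offload
combining rule implemented by eth_bond_slave_inherit_dev_info_rx_next()
and its Tx counterpart: the bonded default keeps the union of the slaves'
default offloads, masked by the intersection of their per-queue
capabilities. The bitmask values below are made up for illustration.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Hypothetical per-queue offload capabilities and defaults */
	uint64_t slave0_capa = 0x0f, slave0_def = 0x05;
	uint64_t slave1_capa = 0x07, slave1_def = 0x03;

	/* First slave: inherit capabilities and defaults as-is */
	uint64_t bond_capa = slave0_capa;
	uint64_t bond_def = slave0_def;

	/* Next slave: intersect capabilities, then mask the merged defaults */
	bond_capa &= slave1_capa;
	bond_def = (bond_def | slave1_def) & bond_capa;

	printf("bonded queue offload capa: 0x%" PRIx64 "\n", bond_capa);
	printf("bonded default offloads:   0x%" PRIx64 "\n", bond_def);

	return 0;
}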