From: Chas Williams <3chas3@gmail.com>
Date: Mon, 24 Sep 2018 10:22:23 -0400
Subject: Re: [dpdk-dev] [PATCH 1/2] net/bonding: provide default Rx/Tx configuration
To: arybchenko@solarflare.com
Cc: Declan Doherty, Chas Williams, dev@dpdk.org, ivan.malov@oktetlabs.ru
In-Reply-To: <1536138818-12342-1-git-send-email-arybchenko@solarflare.com>

On Wed, Sep 5, 2018 at 5:14 AM Andrew Rybchenko wrote:
>
> From: Ivan Malov <ivan.malov@oktetlabs.ru>
>
> The default Rx/Tx configuration has become a helpful
> resource for applications that rely on its optimal
> values to fill in rte_eth_rxconf and rte_eth_txconf
> structures. These structures can then be tweaked.
>
> The default configuration is also used internally by
> the rte_eth_rx_queue_setup and rte_eth_tx_queue_setup
> API calls when a NULL pointer is passed by callers
> as the argument for custom queue configuration.
>
> The use cases of the bonding driver may also benefit
> from exercising default settings in the same way.
>
> Restructure the code to collect the various settings
> from slave ports, making it possible to combine the
> default Rx/Tx configuration of these devices and
> report it to the callers of rte_eth_dev_info_get.
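
Not part of the patch, but to make the commit message concrete: once the
bond reports usable defaults, an application can rely on the NULL shortcut
in the queue setup calls. A minimal sketch (the helper name, port id, queue
index, descriptor count and mempool are invented for illustration):

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical helper: 'port_id' is the bonded port, 'mb_pool' the app's pool. */
static int
rxq_setup_with_defaults(uint16_t port_id, struct rte_mempool *mb_pool)
{
	/* A NULL rx_conf makes ethdev fall back to dev_info.default_rxconf. */
	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      NULL, mb_pool);
}
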
>
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Chas Williams <3chas3@gmail.com>

> ---
>  drivers/net/bonding/rte_eth_bond_api.c     | 161 +++++++++++++++++----
>  drivers/net/bonding/rte_eth_bond_pmd.c     |  10 ++
>  drivers/net/bonding/rte_eth_bond_private.h |   3 +
>  3 files changed, 147 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
> index 8bc04cfd1..206a5c797 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -269,6 +269,136 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
>  	return 0;
>  }
>
> +static void
> +eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
> +					 const struct rte_eth_dev_info *di)
> +{
> +	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
> +
> +	internals->reta_size = di->reta_size;
> +
> +	/* Inherit Rx offload capabilities from the first slave device */
> +	internals->rx_offload_capa = di->rx_offload_capa;
> +	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
> +	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
> +
> +	/* Inherit maximum Rx packet size from the first slave device */
> +	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
> +
> +	/* Inherit default Rx queue settings from the first slave device */
> +	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
> +
> +	/*
> +	 * Turn off descriptor prefetch and writeback by default for all
> +	 * slave devices. Applications may tweak this setting if need be.
> +	 */
> +	rxconf_i->rx_thresh.pthresh = 0;
> +	rxconf_i->rx_thresh.hthresh = 0;
> +	rxconf_i->rx_thresh.wthresh = 0;
> +
> +	/* Setting this to zero should effectively enable default values */
> +	rxconf_i->rx_free_thresh = 0;
> +
> +	/* Disable deferred start by default for all slave devices */
> +	rxconf_i->rx_deferred_start = 0;
> +}
> +
> +static void
> +eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
> +					 const struct rte_eth_dev_info *di)
> +{
> +	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
> +
> +	/* Inherit Tx offload capabilities from the first slave device */
> +	internals->tx_offload_capa = di->tx_offload_capa;
> +	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
> +
> +	/* Inherit default Tx queue settings from the first slave device */
> +	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
> +
> +	/*
> +	 * Turn off descriptor prefetch and writeback by default for all
> +	 * slave devices. Applications may tweak this setting if need be.
> +	 */
> +	txconf_i->tx_thresh.pthresh = 0;
> +	txconf_i->tx_thresh.hthresh = 0;
> +	txconf_i->tx_thresh.wthresh = 0;
> +
> +	/*
> +	 * Setting these parameters to zero assumes that default
> +	 * values will be configured implicitly by slave devices.
> +	 */
> +	txconf_i->tx_free_thresh = 0;
> +	txconf_i->tx_rs_thresh = 0;
> +
> +	/* Disable deferred start by default for all slave devices */
> +	txconf_i->tx_deferred_start = 0;
> +}
> +
> +static void
> +eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
> +					const struct rte_eth_dev_info *di)
> +{
> +	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
> +	const struct rte_eth_rxconf *rxconf = &di->default_rxconf;
> +
> +	internals->rx_offload_capa &= di->rx_offload_capa;
> +	internals->rx_queue_offload_capa &= di->rx_queue_offload_capa;
> +	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
> +
> +	/*
> +	 * If at least one slave device suggests enabling this
> +	 * setting by default, enable it for all slave devices
> +	 * since disabling it may not be necessarily supported.
> +	 */
> +	if (rxconf->rx_drop_en == 1)
> +		rxconf_i->rx_drop_en = 1;
> +
> +	/*
> +	 * Adding a new slave device may cause some of previously inherited
> +	 * offloads to be withdrawn from the internal rx_queue_offload_capa
> +	 * value. Thus, the new internal value of default Rx queue offloads
> +	 * has to be masked by rx_queue_offload_capa to make sure that only
> +	 * commonly supported offloads are preserved from both the previous
> +	 * value and the value being inherited from the new slave device.
> +	 */
> +	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
> +			     internals->rx_queue_offload_capa;
> +
> +	/*
> +	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
> +	 * the power of 2, the lower one is GCD
> +	 */
> +	if (internals->reta_size > di->reta_size)
> +		internals->reta_size = di->reta_size;
> +
> +	if (!internals->max_rx_pktlen &&
> +	    di->max_rx_pktlen < internals->candidate_max_rx_pktlen)
> +		internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
> +}
> +
> +static void
> +eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
> +					const struct rte_eth_dev_info *di)
> +{
> +	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
> +	const struct rte_eth_txconf *txconf = &di->default_txconf;
> +
> +	internals->tx_offload_capa &= di->tx_offload_capa;
> +	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
> +
> +	/*
> +	 * Adding a new slave device may cause some of previously inherited
> +	 * offloads to be withdrawn from the internal tx_queue_offload_capa
> +	 * value. Thus, the new internal value of default Tx queue offloads
> +	 * has to be masked by tx_queue_offload_capa to make sure that only
> +	 * commonly supported offloads are preserved from both the previous
> +	 * value and the value being inherited from the new slave device.
> +	 */
> +	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
> +			     internals->tx_queue_offload_capa;
> +}
> +
>  static int
>  __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
>  {
> @@ -326,34 +456,11 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
>  		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
>  		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
>
> -		internals->reta_size = dev_info.reta_size;
> -
> -		/* Take the first dev's offload capabilities */
> -		internals->rx_offload_capa = dev_info.rx_offload_capa;
> -		internals->tx_offload_capa = dev_info.tx_offload_capa;
> -		internals->rx_queue_offload_capa = dev_info.rx_queue_offload_capa;
> -		internals->tx_queue_offload_capa = dev_info.tx_queue_offload_capa;
> -		internals->flow_type_rss_offloads = dev_info.flow_type_rss_offloads;
> -
> -		/* Inherit first slave's max rx packet size */
> -		internals->candidate_max_rx_pktlen = dev_info.max_rx_pktlen;
> -
> +		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
> +		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
>  	} else {
> -		internals->rx_offload_capa &= dev_info.rx_offload_capa;
> -		internals->tx_offload_capa &= dev_info.tx_offload_capa;
> -		internals->rx_queue_offload_capa &= dev_info.rx_queue_offload_capa;
> -		internals->tx_queue_offload_capa &= dev_info.tx_queue_offload_capa;
> -		internals->flow_type_rss_offloads &= dev_info.flow_type_rss_offloads;
> -
> -		/* RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
> -		 * the power of 2, the lower one is GCD
> -		 */
> -		if (internals->reta_size > dev_info.reta_size)
> -			internals->reta_size = dev_info.reta_size;
> -
> -		if (!internals->max_rx_pktlen &&
> -		    dev_info.max_rx_pktlen < internals->candidate_max_rx_pktlen)
> -			internals->candidate_max_rx_pktlen = dev_info.max_rx_pktlen;
> +		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
> +		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
>  	}
>
>  	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b84f32263..ee24e9658 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -2234,6 +2234,11 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->max_rx_queues = max_nb_rx_queues;
>  	dev_info->max_tx_queues = max_nb_tx_queues;
>
> +	memcpy(&dev_info->default_rxconf, &internals->default_rxconf,
> +	       sizeof(dev_info->default_rxconf));
> +	memcpy(&dev_info->default_txconf, &internals->default_txconf,
> +	       sizeof(dev_info->default_txconf));
> +
>  	/**
>  	 * If dedicated hw queues enabled for link bonding device in LACP mode
>  	 * then we need to reduce the maximum number of data path queues by 1.
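
A side note on the *_next() helpers quoted earlier, since the OR-then-AND
on the offload flags is easy to misread: the OR merges the default offload
requests, while the AND with the intersected queue capabilities drops
anything not supported by every slave. A rough illustration with made-up
masks (these are not real DEV_RX_OFFLOAD_* values):

	uint64_t inherited = 0x3; /* defaults collected so far          */
	uint64_t new_slave = 0x6; /* new slave's default queue offloads */
	uint64_t common    = 0x5; /* ANDed rx_queue_offload_capa        */

	uint64_t merged = (inherited | new_slave) & common;
	/* (0x3 | 0x6) == 0x7, 0x7 & 0x5 == 0x5: only common offloads survive. */
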
> @@ -3054,6 +3059,11 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode) > /* Initially allow to choose any offload type */ > internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK; > > + memset(&internals->default_rxconf, 0, > + sizeof(internals->default_rxconf)); > + memset(&internals->default_txconf, 0, > + sizeof(internals->default_txconf)); > + > memset(internals->active_slaves, 0, sizeof(internals->active_slaves)); > memset(internals->slaves, 0, sizeof(internals->slaves)); > > diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h > index 43e0e448d..d12a0ebbe 100644 > --- a/drivers/net/bonding/rte_eth_bond_private.h > +++ b/drivers/net/bonding/rte_eth_bond_private.h > @@ -160,6 +160,9 @@ struct bond_dev_private { > /** Bit mask of RSS offloads, the bit offset also means flow type */ > uint64_t flow_type_rss_offloads; > > + struct rte_eth_rxconf default_rxconf; /**< Default RxQ conf. */ > + struct rte_eth_txconf default_txconf; /**< Default TxQ conf. */ > + > uint16_t reta_size; > struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 / > RTE_RETA_GROUP_SIZE]; > -- > 2.17.1 >