From: Chas Williams <3chas3@gmail.com>
Date: Tue, 30 Jan 2018 11:15:47 -0500
To: "Ananyev, Konstantin"
Cc: "dev@dpdk.org", "Charles (Chas) Williams", "Lu, Wenzhuo"
Subject: Re: [dpdk-dev] [PATCH] net/ixgbe: fix reconfiguration of rx queues
In-Reply-To: <2601191342CEEE43887BDE71AB97725890565AE3@IRSMSX103.ger.corp.intel.com>
References: <20180129152115.26359-1-3chas3@gmail.com>
 <2601191342CEEE43887BDE71AB977258905653D7@IRSMSX103.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725890565545@IRSMSX103.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725890565566@IRSMSX103.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725890565AC3@IRSMSX103.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725890565AE3@IRSMSX103.ger.corp.intel.com>

On Tue, Jan 30, 2018 at 8:14 AM, Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:

> > > >
> > > > From: "Charles (Chas) Williams"
> > > >
> > > > .dev_configure() may be called again after RX queues have been setup.
> > > > This has the effect of clearing whatever setting the RX queues made for
> > > > rx_bulk_alloc_allowed or rx_vec_allowed. Only reset this configuration
> > > > if there aren't any currently allocated queues.
> > > >
> > > > Fixes: 01fa1d6215fa ("ixgbe: unify Rx setup")
> > > >
> > > > Signed-off-by: Chas Williams
> > > > ---
> > > >  drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++++++--
> > > >  1 file changed, 16 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > > index 37eb668..b39249a 100644
> > > > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > > > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > > @@ -2348,6 +2348,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
> > > >  	struct ixgbe_adapter *adapter =
> > > >  		(struct ixgbe_adapter *)dev->data->dev_private;
> > > >  	int ret;
> > > > +	uint16_t i;
> > > >
> > > >  	PMD_INIT_FUNC_TRACE();
> > > >  	/* multipe queue mode checking */
> > > > @@ -2363,11 +2364,17 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
> > > >
> > > >  	/*
> > > >  	 * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> > > > -	 * allocation or vector Rx preconditions we will reset it.
> > > > +	 * allocation or vector Rx preconditions we will reset it. We
> > > > +	 * can only do this if there aren't any existing RX queues.
> > > >  	 */
> > > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > > +		if (dev->data->rx_queues[i])
> > > > +			goto out;
> > > > +	}
> > >
> > > I don't see why this is needed.
> > > It surely should be possible to reconfigure the device with a different
> > > number of queues.
> > > Konstantin
> > >
> > > Yes, you can add new queues but you shouldn't reset the bulk and vec settings
> > > that have already been chosen by the previously allocated queues.
> >
> > Why is that? Might it be that in a new queue setup the user will change settings?
> >
> > There is no requirement that the user allocates all the RX queues in the same way.
> > Some could have a different number of descs, which is one of the checks in
> > check_rx_burst_bulk_alloc_preconditions()
>
> Exactly. That's why after dev_configure() the user has to call queue_setup() for *all*
> queues he plans to use.
>

Where in the API or documentation does it say that this is necessary? If this
were a requirement, then rte_eth_dev_configure() should drop every allocated
queue. Since it doesn't do this, I can only assume that you are allowed to keep
using queues after calling rte_eth_dev_configure() without having to set them
up again.

>
> >
> > > If those queues set rx_bulk_alloc_allowed to be false, then this is going to cause an
> > > issue with queue release later on.
> >
> > Could you be a bit more specific here:
> > What do you think will be broken in ixgbe_rx_queue_release() in that case?
> >
> > Sorry, I misspoke. It's this function, ixgbe_reset_rx_queue(),
> >
> > 	/*
> > 	 * By default, the Rx queue setup function allocates enough memory for
> > 	 * IXGBE_MAX_RING_DESC. The Rx Burst bulk allocation function requires
> > 	 * extra memory at the end of the descriptor ring to be zero'd out.
> > 	 */
> > 	if (adapter->rx_bulk_alloc_allowed)
> > 		/* zero out extra memory */
> > 		len += RTE_PMD_IXGBE_RX_MAX_BURST;
> >
> > 	/*
> > 	 * Zero out HW ring memory. Zero out extra memory at the end of
> > 	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
> > 	 * reads extra memory as zeros.
> > 	 */
> > 	for (i = 0; i < len; i++) {
> > 		rxq->rx_ring[i] = zeroed_desc;
> > 	}
> >
> > So you potentially write past the rx_ring[] you allocated.
>
> We always allocate rx_ring[] to the maximum possible size plus space for fake descriptors:
>
> drivers/net/ixgbe/ixgbe_rxtx.h:
> #define RX_RING_SZ ((IXGBE_MAX_RING_DESC + RTE_PMD_IXGBE_RX_MAX_BURST) * \
> 		    sizeof(union ixgbe_adv_rx_desc))
>
> then at ixgbe_dev_rx_queue_setup(...):
>
> 	/*
> 	 * Allocate RX ring hardware descriptors. A memzone large enough to
> 	 * handle the maximum ring size is allocated in order to allow for
> 	 * resizing in later calls to the queue setup function.
> 	 */
> 	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
> 				      RX_RING_SZ, IXGBE_ALIGN, socket_id);
> 	...
> 	rxq->rx_ring = (union ixgbe_adv_rx_desc *)rz->addr;
>

What about here?

	len = nb_desc;
	if (adapter->rx_bulk_alloc_allowed)
		len += RTE_PMD_IXGBE_RX_MAX_BURST;

	rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
					  sizeof(struct ixgbe_rx_entry) * len,
					  RTE_CACHE_LINE_SIZE, socket_id);

This is later walked and reset in ixgbe_reset_rx_queue():

	if (adapter->rx_bulk_alloc_allowed)
		/* zero out extra memory */
		len += RTE_PMD_IXGBE_RX_MAX_BURST;

	/*
	 * Zero out HW ring memory. Zero out extra memory at the end of
	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
	 * reads extra memory as zeros.
	 */
	for (i = 0; i < len; i++) {
		rxq->rx_ring[i] = zeroed_desc;
	}

	/*
	 * initialize extra software ring entries. Space for these extra
	 * entries is always allocated
	 */
	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
	for (i = rxq->nb_rx_desc; i < len; ++i) {
		rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
	}

Clearly, rx_bulk_alloc_allowed must remain consistent here.

> >
> > You can't change rx_bulk_alloc_allowed once it has been set and you have allocated queues.
> > In fact, you can't really let some later queue change this setting after the first queue decides
> > what it should be.
>
> See above.
>

Again, I have pointed out where this is a problem.

> > >
> > > This breaks:
> > >
> > > rte_eth_dev_configure(..., 1, 1, ...);
> > > rx_queue_setup(1)
> > > [rx_queue_setup decides that it can't support rx_bulk_alloc_allowed]
> > > ..
> > >
> > > Later, you want to add some more queues, you call
> > >
> > > rte_eth_dev_configure(..., 2, 2, ...);
> >
> > After you call dev_configure, you'll have to do queue_setup() for all your queues.
> > dev_configure() can change some global device settings, so each queue has to be
> > reconfigured.
> > In your example it should be:
> > rte_eth_dev_configure(..., 2, 2, ...);
> > rx_queue_setup(..., 0, ...);
> > rx_queue_setup(..., 1, ...);
> >
> > Konstantin
> >
> > Nothing in the API says this must happen. If that were true, shouldn't rte_eth_dev_configure()
> > automatically drop any existing queues?
>
> One of the first things that ixgbe_dev_rx_queue_setup() does is call ixgbe_rx_queue_release().
> In theory that probably could be moved to rte_eth_dev_configure(), but I think the current model is ok too.
>
> > There is existing code that doesn't do this.
>
> Then I think it is a bug in that code.
>
> > Show me
> > in the API where I am required to set up all my queues _again_ after
> > calling rte_eth_dev_configure()
>
> If you think the documentation doesn't explain that properly - feel free to raise a bug against the doc
> and/or provide a patch for it.
> As I explained above - dev_configure() might change some global settings
> (max_rx_pkt, flow-director settings, etc.) that would affect each queue in that device.
> So queue_setup() is necessary after dev_configure() to make sure it will operate properly.
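
For reference, the sequence being described above - reconfigure, then set up
every queue again before restarting the port - would look roughly like the
sketch below. The names (port_id, nb_queues, conf, mb_pool) and the descriptor
counts are illustrative, not taken from this thread.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Reconfigure a port and set up every queue again afterwards.
 * dev_configure() may change device-wide settings, so each queue is
 * re-created before the port is restarted. */
static int
reconfigure_port(uint16_t port_id, uint16_t nb_queues,
		 const struct rte_eth_conf *conf,
		 struct rte_mempool *mb_pool)
{
	uint16_t q;
	int ret;

	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues, conf);
	if (ret < 0)
		return ret;

	/* Every Rx and Tx queue is set up again after dev_configure(). */
	for (q = 0; q < nb_queues; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id),
				NULL, mb_pool);
		if (ret < 0)
			return ret;
		ret = rte_eth_tx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret < 0)
			return ret;
	}

	return rte_eth_dev_start(port_id);
}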
If you think the documentation is unclear, then *you* feel free to open a bug
to clarify what you believe is the correct behavior.

Yes, rte_eth_dev_configure() is free to change globals. Any of the API routines
are free to change globals. It is up to your PMD to deal with the fallout from
changing such globals. Bonding and vmxnet3 are just two examples of this. The
"final" queue "setup" isn't done until .dev_start(). At that point, the queues
are updated.

Again, there are existing examples in the current code base that do not follow
your idea of what the API should do. Are they wrong as well? The preponderance
of the evidence seems to be against your idea of what the API should be doing.

Queue setup is an expensive operation. If some PMD doesn't need to drop and set
up queues again after an rte_eth_dev_configure(), why are you forcing it to do
so? It was the choice of the author of the ixgbe driver to use globals the way
they did.

The current code is still somewhat wrong even after the patch I propose. The
configuration of each queue depends on what the previous queues did. Since
there isn't a per-queue burst routine, there needs to be a single decision made
after _all_ of the queues are configured. That is the only time when you have
complete knowledge to make a decision about which RX mode to use.

> Konstantin
> >
> > >
> > > rx_queue_setup(2)
> > > [rx_queue_setup hopefully makes the same choice as rxqid = 1?]
> > > ...
> > >
> > > Is one supposed to release all queues before calling rte_eth_dev_configure()? If
> > > that is true, it seems like the change_mtu examples I see are possibly wrong. As
> > > suggested in kernel_nic_interface.rst:
> > >
> > >         ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> > >         if (ret < 0) {
> > >                 RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
> > >                 return ret;
> > >         }
> > >
> > >         /* Restart specific port */
> > >
> > >         ret = rte_eth_dev_start(port_id);
> > >         if (ret < 0) {
> > >                 RTE_LOG(ERR, APP, "Fail to restart port %d\n", port_id);
> > >                 return ret;
> > >         }
> > >
> > > This will obviously reset rx_bulk_alloc_allowed and not reallocate the RX queues.
> > > >
> > > >  	adapter->rx_bulk_alloc_allowed = true;
> > > >  	adapter->rx_vec_allowed = true;
> > > >
> > > > +out:
> > > >  	return 0;
> > > >  }
> > > >
> > > > @@ -4959,6 +4966,7 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
> > > >  	struct rte_eth_conf *conf = &dev->data->dev_conf;
> > > >  	struct ixgbe_adapter *adapter =
> > > >  		(struct ixgbe_adapter *)dev->data->dev_private;
> > > > +	uint16_t i;
> > > >
> > > >  	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
> > > >  		     dev->data->port_id);
> > > > @@ -4981,11 +4989,17 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
> > > >
> > > >  	/*
> > > >  	 * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> > > > -	 * allocation or vector Rx preconditions we will reset it.
> > > > +	 * allocation or vector Rx preconditions we will reset it. We
> > > > +	 * can only do this if there aren't any existing RX queues.
> > > >  	 */
> > > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > > +		if (dev->data->rx_queues[i])
> > > > +			goto out;
> > > > +	}
> > > >  	adapter->rx_bulk_alloc_allowed = true;
> > > >  	adapter->rx_vec_allowed = true;
> > > >
> > > > +out:
> > > >  	return 0;
> > > >  }
> > > >
> > > > --
> > > > 2.9.5
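
For completeness, a minimal sketch of the change_mtu-style flow discussed
above, modeled loosely on the kernel_nic_interface.rst example quoted earlier.
The names (port_id, conf, mb_pool, new_len) and descriptor counts are
illustrative; the unpatched-driver behavior noted in the comments is the one
described in this thread.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch of the flow under discussion (names illustrative). The Rx queue
 * set up in step 1 is never released or re-created, yet the second
 * rte_eth_dev_configure() call resets rx_bulk_alloc_allowed and
 * rx_vec_allowed in the unpatched ixgbe driver. */
static int
change_mtu_flow(uint16_t port_id, struct rte_eth_conf *conf,
		struct rte_mempool *mb_pool, uint32_t new_len)
{
	int ret;

	/* 1. Initial configuration: the queue setup below is what decides
	 *    whether bulk-alloc/vector Rx can be used on this port. */
	ret = rte_eth_dev_configure(port_id, 1, 1, conf);
	if (ret < 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;
	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* 2. Later, an MTU change as in kernel_nic_interface.rst: stop,
	 *    reconfigure, restart -- without touching the existing queues. */
	rte_eth_dev_stop(port_id);
	conf->rxmode.max_rx_pkt_len = new_len;
	ret = rte_eth_dev_configure(port_id, 1, 1, conf);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port_id);
}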