From: Billy McFall
To: Olivier Matz
Cc: thomas.monjalon@6wind.com, wenzhuo.lu@intel.com, dev@dpdk.org
Date: Fri, 24 Mar 2017 09:18:54 -0400
Subject: Re: [dpdk-dev] [PATCH v7 1/3] ethdev: new API to free consumed buffers in Tx ring
In-Reply-To: <20170324134634.3e764423@platinum>
References: <20170309205119.28170-1-bmcfall@redhat.com>
 <20170315180226.5999-1-bmcfall@redhat.com>
 <20170315180226.5999-2-bmcfall@redhat.com>
 <20170323113716.57e27591@glumotte.dev.6wind.com>
 <20170324134634.3e764423@platinum>
List-Id: DPDK patches and discussions <dev@dpdk.org>

On Fri, Mar 24, 2017 at 8:46 AM, Olivier Matz wrote:

> Hi Billy,
>
> On Thu, 23 Mar 2017 09:32:14 -0400, Billy McFall wrote:
> > Thank you for your comments. See inline.
> >
> > On Thu, Mar 23, 2017 at 6:37 AM, Olivier MATZ wrote:
> >
> > > Hi Billy,
> > >
> > > On Wed, 15 Mar 2017 14:02:24 -0400, Billy McFall wrote:
> > > > Add a new API to force free consumed buffers on Tx ring. API will return
> > > > the number of packets freed (0-n) or error code if feature not supported
> > > > (-ENOTSUP) or input invalid (-ENODEV).
> > > >
> > > > Signed-off-by: Billy McFall
> > > > ---
> > > >  doc/guides/conf.py                      |  7 +++++--
> > > >  doc/guides/nics/features/default.ini    |  4 +++-
> > > >  doc/guides/prog_guide/poll_mode_drv.rst | 28 ++++++++++++++++++++++++++++
> > > >  doc/guides/rel_notes/release_17_05.rst  |  7 ++++++-
> > > >  lib/librte_ether/rte_ethdev.c           | 14 ++++++++++++++
> > > >  lib/librte_ether/rte_ethdev.h           | 31 +++++++++++++++++++++++++++++++
> > > >  6 files changed, 87 insertions(+), 4 deletions(-)
> > > >
> > >
> > > [...]
> > >
> > > > --- a/doc/guides/prog_guide/poll_mode_drv.rst
> > > > +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> > > > @@ -249,6 +249,34 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
> > > >
> > > >  When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
> > > >
> > > > +Free Tx mbuf on Demand
> > > > +~~~~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +Many of the drivers don't release the mbuf back to the mempool, or local cache, immediately after the packet has been
> > > > +transmitted.
> > > > +Instead, they leave the mbuf in their Tx ring and either perform a bulk release when the ``tx_rs_thresh`` has been
> > > > +crossed or free the mbuf when a slot in the Tx ring is needed.
> > > > +
> > > > +An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
> > > > +This API requests the driver to release mbufs that are no longer in use, independent of whether or not the
> > > > +``tx_rs_thresh`` has been crossed.
> > > > +There are two scenarios when an application may want the mbuf released immediately:
> > > > +
> > > > +* When a given packet needs to be sent to multiple destination interfaces (either for Layer 2 flooding or Layer 3
> > > > +  multi-cast).
> > > > +  One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
> > > > +  A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API until the reference
> > > > +  count on the packet is decremented.
> > > > +  Then the same packet can be transmitted to the next destination interface.
> > >
> > > By reading this paragraph, it's not so clear to me that the packet
> > > that will be transmitted on all interfaces will be different from
> > > one port to another.
> > >
> > > Maybe it could be reworded to insist on that?
> > >
> >
> > What if I add the following sentence:
> >
> >   Then the same packet can be transmitted to the next destination interface.
> > + The application is still responsible for managing any packet manipulations needed between the different destination
> > + interfaces, but a packet copy can be avoided.
>
> looks good, thanks.
>
> > > > +
> > > > +* If an application is designed to make multiple runs, like a packet generator, and one run has completed.
> > > > +  The application may want to reset to a clean state.
> > >
> > > I'd reword into:
> > >
> > >   Some applications are designed to make multiple runs, like a packet generator.
> > >   Between each run, the application may want to reset to a clean state.
> > >
> > > What do you mean by "clean state"? All mbufs returned into the mempools?
> > > Why would a packet generator need that? For performance?
> > >
> >
> > Reworded as you suggested, then attempted to explain a 'clean state'.
> > Also reworded the last sentence a little.
> >
> > + * Some applications are designed to make multiple runs, like a packet generator.
> > +   For performance reasons and consistency between runs, the application may want to reset back to an initial state
> > +   between each run, where all mbufs are returned to the mempool.
> > +   In this case, it can call the ``rte_eth_tx_done_cleanup()`` API for each destination interface it has been using
> > +   to request it to release of all its used mbufs.
>
> ok, looks clearer to me, thanks
>
> > > Also, do we want to ensure that all packets are actually transmitted?
> > >
> >
> > Added an additional sentence to indicate that this API doesn't manage
> > whether or not the packet has been transmitted.
> >
> >   Then the same packet can be transmitted to the next destination interface.
> >   The application is still responsible for managing any packet manipulations needed between the different destination
> >   interfaces, but a packet copy can be avoided.
> > + This API is independent of whether the packet was transmitted or dropped, only that the mbuf is no longer in use by
> > + the interface.
>
> ok
>
> > > Can we do that with this API or should we use another API like
> > > rte_eth_tx_descriptor_status() [1] ?
> > >
> > > [1] http://dpdk.org/dev/patchwork/patch/21549/
> >
> > I read through this patch. This API doesn't indicate if the packet was
> > transmitted or dropped (I think that is what you were asking). This API
> > could be used by the application to determine if the mbuf has been
> > freed, as opposed to polling rte_mbuf_refcnt_read() for a change
> > in value. Did I miss your point?
>
> Maybe my question was not clear :)
> Let me try to reword it.
>
> For a traffic generator use-case, a dummy algorithm may be:
>
> 1/ send packets in a loop until a condition is met (ex: packet count reached)
> 2/ call rte_eth_tx_done_cleanup()
> 3/ read stats for report
>
> I think there is something missing between 1/ and 2/, to ensure that
> all packets that were in the tx queue are processed (either transmitted
> or dropped). If that's not the case, both steps 2/ and 3/ will not
> behave as expected:
> - all mbufs won't be returned to the pool
> - statistics may be wrong
>
> Maybe a simple wait() could do the job.
> Using a combination of rte_eth_tx_done_cleanup() + rte_eth_tx_descriptor_status()
> is probably also a solution.
>
> Do you confirm rte_eth_tx_done_cleanup() does not check that?
>

Confirmed: rte_eth_tx_done_cleanup() does not check that. In the flooding
case, the application is expected to poll rte_eth_tx_done_cleanup() until
some condition is met, such as the reference count of a given packet being
decremented. So in the packet generator case, the application would need to
wait some time and/or call rte_eth_tx_descriptor_status() as you suggested.

My original patch returned RTE_DONE (no more packets pending), RTE_PROCESSING
(freed what it could, but there are still packets in the queue) or -ERRNO on
error, and the number of packets freed was returned via a pointer in the
parameter list. That would have solved what you are asking, but it was shot
down as being overkill.

Should I add another sentence to the packet generator bullet indicating that
it is the application's job to make sure no more packets are pending? Like:

  In this case, it can call the ``rte_eth_tx_done_cleanup()`` API for each
  destination interface it has been using to request it to release of all
  its used mbufs.
+ It is the application's responsibility to ensure all packets have been
+ processed by the destination interface.
+ Use rte_eth_tx_descriptor_status() to obtain the status of the transmit
+ queue.

Thanks

> Olivier

-- 
*Billy McFall*
SDN Group
Office of Technology
*Red Hat*
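
A rough, untested sketch of the two usage patterns discussed above. The
helper names (flood_packet(), drain_after_run()), the port/queue parameters
and the mempool-based drain test are invented for illustration only; the
mempool check merely stands in for the rte_eth_tx_descriptor_status()
approach mentioned in the thread.

/*
 * Illustrative only: minimal error handling, single Tx queue per port.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_cycles.h>

/*
 * Flooding case: transmit the same mbuf on several ports without copying it.
 * An extra reference is taken before each send so the driver's free only
 * drops the refcount; rte_eth_tx_done_cleanup() is then polled until the
 * driver has released its reference and the mbuf can be reused.
 */
static void
flood_packet(struct rte_mbuf *m, const uint16_t *ports, uint16_t nb_ports,
	     uint16_t queue_id)
{
	uint16_t i;

	for (i = 0; i < nb_ports; i++) {
		/* Keep the mbuf alive across the driver's free. */
		rte_mbuf_refcnt_update(m, 1);

		if (rte_eth_tx_burst(ports[i], queue_id, &m, 1) != 1) {
			rte_mbuf_refcnt_update(m, -1);	/* not queued */
			continue;
		}

		/* Wait until only the application holds the mbuf. */
		while (rte_mbuf_refcnt_read(m) > 1) {
			if (rte_eth_tx_done_cleanup(ports[i], queue_id, 0) < 0)
				break;	/* e.g. -ENOTSUP for this driver */
		}

		/* Any per-port header rewrite would go here before the
		 * next transmit; no packet copy is needed. */
	}

	rte_pktmbuf_free(m);	/* drop the application's own reference */
}

/*
 * Packet generator case: between runs, wait until every mbuf has gone back
 * to the pool before reading the statistics. This assumes the application
 * holds no other mbufs from 'pool' at this point.
 */
static void
drain_after_run(uint16_t port_id, uint16_t queue_id, struct rte_mempool *pool)
{
	while (rte_mempool_avail_count(pool) < pool->size) {
		if (rte_eth_tx_done_cleanup(port_id, queue_id, 0) < 0)
			break;	/* cleanup not supported, give up */
		rte_delay_us(10);	/* give the hardware time to finish */
	}
}

Whether a mempool-level check like this or rte_eth_tx_descriptor_status() is
the better drain test probably depends on how the rest of the application
manages its mbufs.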