From: Billy McFall
Date: Wed, 11 Jan 2017 14:54:11 -0500
To: Stephen Hemminger
Cc: thomas.monjalon@6wind.com, wenzhuo.lu@intel.com, dev@dpdk.org
In-Reply-To: <20161216082404.3377bfc6@xeon-e3>
Subject: Re: [dpdk-dev] [PATCH 3/3] driver: vHost support to free consumed buffers

This new API is
attempting to address two scenarios:

1) The application wants to reuse an existing mbuf to avoid a packet copy
   (example: flooding a packet to multiple ports). The application
   increments the reference count of the packet and then polls the new API
   until the reference count for the given mbuf is decremented.

2) The application runs out of mbufs, or an application like pktgen
   finishes a run and is preparing for an additional run, and calls the API
   to free consumed packets so processing can continue.

With the current design, the application calls the new API; if rval >= 0,
it can assume mbufs are being freed and can call the API multiple times if
need be (either to get enough mbufs to continue or to get a specific one
freed). If rval < 0, it takes some other action, like making a copy of the
packet in the flooding case, or whatever the application does today. If
the default behavior is to return 0, the application can't take any
additional action.

I am submitting a V2 of the patch with the rte_eth_tx_buffer_flush() call
and associated parameters removed, so we can continue the discussion on
whether to add the new API.

Thanks,
Billy McFall

On Fri, Dec 16, 2016 at 11:24 AM, Stephen Hemminger <
stephen@networkplumber.org> wrote:

> On Fri, 16 Dec 2016 07:48:51 -0500
> Billy McFall wrote:
>
> > Add support to the vHost driver for the new API to force free consumed
> > buffers on the TX ring. vHost does not cache the mbufs, so there is no
> > work to do.
> >
> > Signed-off-by: Billy McFall
> > ---
> >  drivers/net/vhost/rte_eth_vhost.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/drivers/net/vhost/rte_eth_vhost.c
> > b/drivers/net/vhost/rte_eth_vhost.c
> > index 766d4ef..6493d56 100644
> > --- a/drivers/net/vhost/rte_eth_vhost.c
> > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > @@ -939,6 +939,16 @@ eth_queue_release(void *q)
> >  }
> >
> >  static int
> > +eth_tx_done_cleanup(void *txq __rte_unused, uint32_t free_cnt __rte_unused)
> > +{
> > +	/*
> > +	 * vHost does not hang onto mbufs. eth_vhost_tx() copies packet data
> > +	 * and releases the mbuf, so there is nothing to clean up.
> > +	 */
> > +	return 0;
> > +}
> > +
> > +static int
> >  eth_link_update(struct rte_eth_dev *dev __rte_unused,
> >  		int wait_to_complete __rte_unused)
> >  {
> > @@ -979,6 +989,7 @@ static const struct eth_dev_ops ops = {
> >  	.tx_queue_setup = eth_tx_queue_setup,
> >  	.rx_queue_release = eth_queue_release,
> >  	.tx_queue_release = eth_queue_release,
> > +	.tx_done_cleanup = eth_tx_done_cleanup,
> >  	.link_update = eth_link_update,
> >  	.stats_get = eth_stats_get,
> >  	.stats_reset = eth_stats_reset,
>
> Rather than having to change every driver, why not make this the default
> behavior?