Date: Tue, 20 Dec 2016 12:27:46 +0100
From: Adrien Mazarguil
To: Billy McFall
Cc: thomas.monjalon@6wind.com, wenzhuo.lu@intel.com, dev@dpdk.org, Stephen Hemminger
Message-ID: <20161220112746.GT10340@6wind.com>
References: <20161216124851.2640-1-bmcfall@redhat.com> <20161216124851.2640-2-bmcfall@redhat.com>
In-Reply-To: <20161216124851.2640-2-bmcfall@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 1/3] ethdev: New API to free consumed buffers in TX ring

Hi Billy,

On Fri, Dec 16, 2016 at 07:48:49AM -0500, Billy McFall wrote:
> Add a new API to force free consumed buffers on TX ring. API will return
> the number of packets freed (0-n) or error code if feature not supported
> (-ENOTSUP) or input invalid (-ENODEV).
>
> Because rte_eth_tx_buffer() may be used, and mbufs may still be held
> in local buffer, the API also accepts *buffer and *sent. Before
> attempting to free, rte_eth_tx_buffer_flush() is called to make sure
> all mbufs are sent to Tx ring. rte_eth_tx_buffer_flush() is called even
> if threshold is not met.
>
> Signed-off-by: Billy McFall
> ---
>  lib/librte_ether/rte_ethdev.h | 56 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 56 insertions(+)
>
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 9678179..e3f2be4 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1150,6 +1150,9 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
>  typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
>  /**< @internal Check DD bit of specific RX descriptor */
>
> +typedef int (*eth_tx_done_cleanup_t)(void *txq, uint32_t free_cnt);
> +/**< @internal Force mbufs to be from TX ring. */
> +
>  typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
>      uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
>
> @@ -1467,6 +1470,7 @@ struct eth_dev_ops {
>      eth_rx_disable_intr_t      rx_queue_intr_disable;
>      eth_tx_queue_setup_t       tx_queue_setup;/**< Set up device TX queue.*/
>      eth_queue_release_t        tx_queue_release;/**< Release TX queue.*/
> +    eth_tx_done_cleanup_t      tx_done_cleanup;/**< Free tx ring mbufs */
>      eth_dev_led_on_t           dev_led_on;    /**< Turn on LED. */
>      eth_dev_led_off_t          dev_led_off;   /**< Turn off LED. */
>      flow_ctrl_get_t            flow_ctrl_get; /**< Get flow control. */
> @@ -2943,6 +2947,58 @@ rte_eth_tx_buffer(uint8_t port_id, uint16_t queue_id,
>  }
>
>  /**
> + * Request the driver to free mbufs currently cached by the driver. The
> + * driver will only free the mbuf if it is no longer in use.
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param queue_id
> + *   The index of the transmit queue through which output packets must be
> + *   sent.
> + *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
> + *   to rte_eth_dev_configure().
> + * @param free_cnt
> + *   Maximum number of packets to free. Use 0 to indicate all possible packets
> + *   should be freed. Note that a packet may be using multiple mbufs.
> + * @param buffer
> + *   Buffer used to collect packets to be sent. If provided, the buffer will
> + *   be flushed, even if the current length is less than buffer->size. Pass NULL
> + *   if buffer has already been flushed.
> + * @param sent
> + *   Pointer to return number of packets sent if buffer has packets to be sent.
> + *   If *buffer is supplied, *sent must also be supplied.
> + * @return
> + *   Failure: < 0
> + *     -ENODEV: Invalid interface
> + *     -ENOTSUP: Driver does not support function
> + *   Success: >= 0
> + *     0-n: Number of packets freed. More packets may still remain in ring that
> + *     are in use.
> + */
> +
> +static inline int
> +rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt,
> +        struct rte_eth_dev_tx_buffer *buffer, uint16_t *sent)
> +{
> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +
> +    /* Validate Input Data. Bail if not valid or not supported. */
> +    RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_done_cleanup, -ENOTSUP);
> +
> +    /*
> +     * If transmit buffer is provided and there are still packets to be
> +     * sent, then send them before attempting to free pending mbufs.
> +     */
> +    if (buffer && sent)
> +        *sent = rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
> +
> +    /* Call driver to free pending mbufs. */
> +    return (*dev->dev_ops->tx_done_cleanup)(dev->data->tx_queues[queue_id],
> +            free_cnt);
> +}
> +
> +/**
>   * Configure a callback for buffered packets which cannot be sent
>   *
>   * Register a specific callback to be called when an attempt is made to send

Just a thought to follow up on Stephen's comment [1] and further simplify this
API: how about not adding any new eth_dev_ops at all, and instead defining what
should happen during an empty TX burst call (tx_burst() with 0 packets)?
Several PMDs already check for this scenario and start by cleaning up completed
packets anyway, so they effectively already implement part of this definition
for free.

The main difference compared with this API is that you wouldn't know how many
mbufs were freed and wouldn't collect them into an array. However, most
applications have a single mbuf pool and/or know where their mbufs come from,
so they can simply query the pool, or attempt to re-allocate from it after
issuing empty bursts, in case of starvation. Rough sketches of both approaches
are appended below for illustration.

[1] http://dpdk.org/ml/archives/dev/2016-December/052469.html

--
Adrien Mazarguil
6WIND
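
For context, a minimal usage sketch of the proposed rte_eth_tx_done_cleanup()
as documented in the patch quoted above, assuming that patch is applied. The
helper name and the pool/port_id/queue_id/txb variables are placeholders for
whatever the application has set up (rte_pktmbuf_pool_create(),
rte_eth_dev_configure(), rte_eth_tx_buffer_init()); they are not part of the
patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Allocate an mbuf; on pool starvation, ask the driver to flush the local TX
 * buffer and free mbufs it has already transmitted, then retry once.
 * Hypothetical helper built on the API proposed in this patch.
 */
static struct rte_mbuf *
alloc_with_tx_cleanup(struct rte_mempool *pool, uint8_t port_id,
                      uint16_t queue_id, struct rte_eth_dev_tx_buffer *txb)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        uint16_t sent = 0;
        int freed;

        if (m != NULL)
                return m;

        /* free_cnt == 0 means no limit on the number of packets to free. */
        freed = rte_eth_tx_done_cleanup(port_id, queue_id, 0, txb, &sent);
        if (freed <= 0)
                return NULL; /* -ENODEV/-ENOTSUP or nothing could be freed. */

        /* Some mbufs went back to the pool, retry the allocation. */
        return rte_pktmbuf_alloc(pool);
}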
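
And a comparable sketch of the empty-burst alternative suggested in the reply.
It assumes the proposed semantics, i.e. that tx_burst() called with 0 packets
lets the PMD reclaim completed descriptors and return their mbufs to the pool;
today only some PMDs happen to behave this way, and the application cannot
tell how many mbufs were freed. The helper name and variables are again
placeholders:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Allocate an mbuf; on pool starvation, issue an empty TX burst so the PMD
 * gets a chance to clean up completed packets, then retry once.
 * Hypothetical helper relying on the empty-burst behaviour discussed above.
 */
static struct rte_mbuf *
alloc_with_empty_burst(struct rte_mempool *pool, uint8_t port_id,
                       uint16_t queue_id)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);

        if (m != NULL)
                return m;

        /* Empty burst: no packets are sent, completion cleanup may run. */
        (void)rte_eth_tx_burst(port_id, queue_id, NULL, 0);

        /* Retry; there is no way to know how many mbufs were freed. */
        return rte_pktmbuf_alloc(pool);
}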