From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Di, ChenxuX"
To: "Ananyev, Konstantin", "dev@dpdk.org"
CC: "Yang, Qiming"
Thread-Topic: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
Date: Fri, 3 Jan 2020 09:01:13 +0000
Message-ID: <3B926E44943CB04AA3A39AC16328CE39B9262D@SHSMSX101.ccr.corp.intel.com>
References: <20191203055134.72874-1-chenxux.di@intel.com> <20191230093840.17701-1-chenxux.di@intel.com> <20191230093840.17701-4-chenxux.di@intel.com>
In-Reply-To: 
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
List-Id: DPDK patches and discussions
Sender: "dev"

Hi,

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, December 30, 2019 8:54 PM
> To: Di, ChenxuX; dev@dpdk.org
> Cc: Yang, Qiming; Di, ChenxuX
> Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
>
> Hi,
>
> > Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup to
> > force free consumed buffers on the Tx ring.
> >
> > Signed-off-by: Chenxu Di
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
> >  drivers/net/ixgbe/ixgbe_rxtx.c   | 116 +++++++++++++++++++++++++++++++
> >  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
> >  3 files changed, 120 insertions(+)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 2c6fd0f13..0091405db 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
> >  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
> >  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
> >  	.tm_ops_get           = ixgbe_tm_ops_get,
> > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
>
> I don't see how we can have one tx_done_cleanup() for different Tx functions.
> The vector and scalar Tx paths use different formats for their sw_ring[] entries.
> Also, the offload and simple Tx paths use different methods to track used/free
> descriptors, and use different functions to free them: the offload path uses the
> tx_entry next_id and last_id fields plus txq->last_desc_cleaned, while the simple
> Tx path uses tx_next_dd.
>

This patch will not include a function for the vector path, and I will update my
code to make it work for the offload and simple paths.
>
> >  };
> >
> >  /*
> > @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
> >  	.reta_query           = ixgbe_dev_rss_reta_query,
> >  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
> >  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> >  };
> >
> >  /* store statistics names and its offset in stats structure */
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > index fa572d184..520b9c756 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -2306,6 +2306,122 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
> >  	}
> >  }
> >
> > +int
> > +ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)
>
> That seems to work only for the offload (full) Tx path (ixgbe_xmit_pkts).
> The simple (fast) path seems not covered by this function.
>

Same as above.

> > +{
> > +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
> > +	struct ixgbe_tx_entry *sw_ring;
> > +	volatile union ixgbe_adv_tx_desc *txr;
> > +	uint16_t tx_first; /* First segment analyzed. */
> > +	uint16_t tx_id;    /* Current segment being processed. */
> > +	uint16_t tx_last;  /* Last segment in the current packet. */
> > +	uint16_t tx_next;  /* First segment of the next packet. */
> > +	int count;
> > +
> > +	if (txq == NULL)
> > +		return -ENODEV;
> > +
> > +	count = 0;
> > +	sw_ring = txq->sw_ring;
> > +	txr = txq->tx_ring;
> > +
> > +	/*
> > +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> > +	 * of that packet (the last segment in the packet chain) and
> > +	 * then the next segment will be the start of the oldest segment
> > +	 * in the sw_ring.
>
> Not sure I understand the sentence above.
> tx_tail is the value of the TDT HW register (most recently armed by SW TD).
> last_id is the index of the last descriptor for a multi-seg packet.
> next_id is just the index of the next descriptor in the HW TD ring.
> How do you conclude that it will be the 'oldest segment in the sw_ring'?
>

tx_tail is the last sent packet on the sw_ring, and xmit_cleanup or
tx_free_bufs is only called when nb_tx_free < tx_free_thresh.
So sw_ring[tx_tail].next_id must be the beginning of the mbufs that are not yet
used or have already been freed. The loop then starts there and frees completed
packets until it reaches an mbuf that is still in use.

> Another question: why do you need to write your own functions?
> Why can't you reuse the existing ixgbe_xmit_cleanup() for the full (offload) path and
> ixgbe_tx_free_bufs() for the simple path?
> Yes, ixgbe_xmit_cleanup() doesn't free mbufs, but at least it could be used to
> determine the finished Tx descriptors.
> Based on that you can free the appropriate sw_ring[] entries.
>

The reason why I don't reuse the existing functions is that they each free a fixed
number of mbufs, while the free_cnt argument of rte_eth_tx_done_cleanup() is a
number of packets. The code also needs to check which mbufs belong to the same packet.

> > This is the first packet that will be
> > + * attempted to be freed.
> > + */
> > +
> > +	/* Get last segment in most recently added packet. */
> > +	tx_last = sw_ring[txq->tx_tail].last_id;
> > +
> > +	/* Get the next segment, which is the oldest segment in ring. */
> > +	tx_first = sw_ring[tx_last].next_id;
> > +
> > +	/* Set the current index to the first. */
> > +	tx_id = tx_first;
> > +
> > +	/*
> > +	 * Loop through each packet. For each packet, verify that an
> > +	 * mbuf exists and that the last segment is free. If so, free
> > +	 * it and move on.
> > +	 */
> > +	while (1) {
> > +		tx_last = sw_ring[tx_id].last_id;
> > +
> > +		if (sw_ring[tx_last].mbuf) {
> > +			if (!(txr[tx_last].wb.status &
> > +					IXGBE_TXD_STAT_DD))
> > +				break;
> > +
> > +			/* Get the start of the next packet. */
> > +			tx_next = sw_ring[tx_last].next_id;
> > +
> > +			/*
> > +			 * Loop through all segments in a
> > +			 * packet.
> > +			 */
> > +			do {
> > +				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > +				sw_ring[tx_id].mbuf = NULL;
> > +				sw_ring[tx_id].last_id = tx_id;
> > +
> > +				/* Move to next segment. */
> > +				tx_id = sw_ring[tx_id].next_id;
> > +
> > +			} while (tx_id != tx_next);
> > +
> > +			/*
> > +			 * Increment the number of packets
> > +			 * freed.
> > +			 */
> > +			count++;
> > +
> > +			if (unlikely(count == (int)free_cnt))
> > +				break;
> > +		} else {
> > +			/*
> > +			 * There are multiple reasons to be here:
> > +			 * 1) All the packets on the ring have been
> > +			 *    freed - tx_id is equal to tx_first
> > +			 *    and some packets have been freed.
> > +			 *    - Done, exit
> > +			 * 2) Interfaces has not sent a rings worth of
> > +			 *    packets yet, so the segment after tail is
> > +			 *    still empty. Or a previous call to this
> > +			 *    function freed some of the segments but
> > +			 *    not all so there is a hole in the list.
> > +			 *    Hopefully this is a rare case.
> > +			 *    - Walk the list and find the next mbuf. If
> > +			 *      there isn't one, then done.
> > +			 */
> > +			if (likely(tx_id == tx_first && count != 0))
> > +				break;
> > +
> > +			/*
> > +			 * Walk the list and find the next mbuf, if any.
> > +			 */
> > +			do {
> > +				/* Move to next segment. */
> > +				tx_id = sw_ring[tx_id].next_id;
> > +
> > +				if (sw_ring[tx_id].mbuf)
> > +					break;
> > +
> > +			} while (tx_id != tx_first);
> > +
> > +			/*
> > +			 * Determine why previous loop bailed. If there
> > +			 * is not an mbuf, done.
> > +			 */
> > +			if (sw_ring[tx_id].mbuf == NULL)
> > +				break;
> > +		}
> > +	}
> > +
> > +	return count;
> > +}
> > +
> >  static void __attribute__((cold))
> >  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
> >  {
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> > index 505d344b9..2c3770af6 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > @@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
> >  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
> >  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> >
> > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > +
> >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> >
> > --
> > 2.17.1
>