From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Di, ChenxuX" <chenxux.di@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
CC: "Yang, Qiming" <qiming.yang@intel.com>, "Di, ChenxuX"
 <chenxux.di@intel.com>
Thread-Topic: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
Thread-Index: AQHVvvUp6LlAIfvCjkGBZ/4yGRUggKfSnF5Q
Date: Mon, 30 Dec 2019 12:53:58 +0000
Message-ID: <SN6PR11MB2558D21A9E6008741FE793079A270@SN6PR11MB2558.namprd11.prod.outlook.com>
References: <20191203055134.72874-1-chenxux.di@intel.com>
 <20191230093840.17701-1-chenxux.di@intel.com>
 <20191230093840.17701-4-chenxux.di@intel.com>
In-Reply-To: <20191230093840.17701-4-chenxux.di@intel.com>
Accept-Language: en-US
Content-Language: en-US
Subject: Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers

Hi,

> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
>
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
>  drivers/net/ixgbe/ixgbe_rxtx.c   | 116 +++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
>  3 files changed, 120 insertions(+)
>
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 2c6fd0f13..0091405db 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>  	.tm_ops_get           = ixgbe_tm_ops_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,

I don't see how we can have one tx_done_cleanup() for different TX functions.
The vector and scalar TX paths use different formats for the sw_ring[] entries.
Also, the offload and simple TX paths use different methods to track used/free descriptors,
and use different functions to free them:
the offload path uses tx_entry next_id/last_id plus txq->last_desc_cleaned, while
the simple TX paths use tx_next_dd.


>  };
>
>  /*
> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.reta_query           = ixgbe_dev_rss_reta_query,
>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
>  };
>
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index fa572d184..520b9c756 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2306,6 +2306,122 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>  	}
>  }
>
> +int ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)

That seems to work only for the offload (full-featured) TX path (ixgbe_xmit_pkts).
The simple (fast) path does not seem to be covered by this function.

> +{
> +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
> +	struct ixgbe_tx_entry *sw_ring;
> +	volatile union ixgbe_adv_tx_desc *txr;
> +	uint16_t tx_first; /* First segment analyzed. */
> +	uint16_t tx_id;    /* Current segment being processed. */
> +	uint16_t tx_last;  /* Last segment in the current packet. */
> +	uint16_t tx_next;  /* First segment of the next packet. */
> +	int count;
> +
> +	if (txq == NULL)
> +		return -ENODEV;
> +
> +	count = 0;
> +	sw_ring = txq->sw_ring;
> +	txr = txq->tx_ring;
> +
> +	/*
> +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> +	 * of that packet (the last segment in the packet chain) and
> +	 * then the next segment will be the start of the oldest segment
> +	 * in the sw_ring.

I am not sure I understand the sentence above.
tx_tail is the value of the TDT HW register (most recently armed by the SW TD).
last_id is the index of the last descriptor for a multi-segment packet.
next_id is just the index of the next descriptor in the HW TD ring.
How do you conclude that it will be the 'oldest segment in the sw_ring'?

Another question: why do you need to write your own functions?
Why can't you reuse the existing ixgbe_xmit_cleanup() for the full (offload) path
and ixgbe_tx_free_bufs() for the simple path?
Yes, ixgbe_xmit_cleanup() doesn't free mbufs, but at least it could be used
to determine the finished TX descriptors.
Based on that, you can free the appropriate sw_ring[] entries.

> This is the first packet that will be
> +	 * attempted to be freed.
> +	 */
> +
> +	/* Get last segment in most recently added packet. */
> +	tx_last = sw_ring[txq->tx_tail].last_id;
> +
> +	/* Get the next segment, which is the oldest segment in ring. */
> +	tx_first = sw_ring[tx_last].next_id;
> +
> +	/* Set the current index to the first. */
> +	tx_id = tx_first;
> +
> +	/*
> +	 * Loop through each packet. For each packet, verify that an
> +	 * mbuf exists and that the last segment is free. If so, free
> +	 * it and move on.
> +	 */
> +	while (1) {
> +		tx_last = sw_ring[tx_id].last_id;
> +
> +		if (sw_ring[tx_last].mbuf) {
> +			if (!(txr[tx_last].wb.status &
> +				IXGBE_TXD_STAT_DD))
> +				break;
> +
> +			/* Get the start of the next packet. */
> +			tx_next =3D sw_ring[tx_last].next_id;
> +
> +			/*
> +			 * Loop through all segments in a
> +			 * packet.
> +			 */
> +			do {
> +				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> +				sw_ring[tx_id].mbuf = NULL;
> +				sw_ring[tx_id].last_id = tx_id;
> +
> +				/* Move to next segment. */
> +				tx_id =3D sw_ring[tx_id].next_id;
> +
> +			} while (tx_id != tx_next);
> +
> +			/*
> +			 * Increment the number of packets
> +			 * freed.
> +			 */
> +			count++;
> +
> +			if (unlikely(count == (int)free_cnt))
> +				break;
> +		} else {
> +			/*
> +			 * There are multiple reasons to be here:
> +			 * 1) All the packets on the ring have been
> +			 *    freed - tx_id is equal to tx_first
> +			 *    and some packets have been freed.
> +			 *    - Done, exit
> +			 * 2) Interfaces has not sent a rings worth of
> +			 *    packets yet, so the segment after tail is
> +			 *    still empty. Or a previous call to this
> +			 *    function freed some of the segments but
> +			 *    not all so there is a hole in the list.
> +			 *    Hopefully this is a rare case.
> +			 *    - Walk the list and find the next mbuf. If
> +			 *      there isn't one, then done.
> +			 */
> +			if (likely(tx_id == tx_first && count != 0))
> +				break;
> +
> +			/*
> +			 * Walk the list and find the next mbuf, if any.
> +			 */
> +			do {
> +				/* Move to next segment. */
> +				tx_id = sw_ring[tx_id].next_id;
> +
> +				if (sw_ring[tx_id].mbuf)
> +					break;
> +
> +			} while (tx_id != tx_first);
> +
> +			/*
> +			 * Determine why previous loop bailed. If there
> +			 * is not an mbuf, done.
> +			 */
> +			if (sw_ring[tx_id].mbuf == NULL)
> +				break;
> +		}
> +	}
> +
> +	return count;
> +}
> +
>  static void __attribute__((cold))
>  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
>  {
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 505d344b9..2c3770af6 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
>
> +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> +
>  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
>  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
>=20
> --
> 2.17.1