From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Di, ChenxuX", "dev@dpdk.org"
CC: "Yang, Qiming"
Thread-Topic: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
Date: Fri, 10 Jan 2020 12:46:35 +0000
References: <20191203055134.72874-1-chenxux.di@intel.com>
 <20200109103822.89011-1-chenxux.di@intel.com>
 <20200109103822.89011-4-chenxux.di@intel.com>
 <3B926E44943CB04AA3A39AC16328CE39B95155@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <3B926E44943CB04AA3A39AC16328CE39B95155@SHSMSX101.ccr.corp.intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
List-Id: DPDK patches and discussions

Hi Chenxu,

> hi, Konstantin
>
> thanks for your opinion; I have fixed almost all of them in the new version of the patch, except one.
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, January 9, 2020 10:02 PM
> > To: Di, ChenxuX; dev@dpdk.org
> > Cc: Yang, Qiming; Di, ChenxuX
> > Subject: RE: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
> >
> >
> > Hi Chenxu,
> >
> > Good progress with the _full_ version, but I think some issues still remain.
> > More comments inline.
> > Konstantin
> >
> > >
> > > Signed-off-by: Chenxu Di
> > > ---
> > >  drivers/net/ixgbe/ixgbe_ethdev.c |   4 +
> > >  drivers/net/ixgbe/ixgbe_rxtx.c   | 156 +++++++++++++++++++++++++++++-
> > >  drivers/net/ixgbe/ixgbe_rxtx.h   |  10 ++
> > >  3 files changed, 169 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > index 2c6fd0f13..668c36188 100644
> > > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
> > >  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
> > >  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
> > >  	.tm_ops_get           = ixgbe_tm_ops_get,
> > > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> > >  };
> > >
> > >  /*
> > > @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
> > >  	.reta_query           = ixgbe_dev_rss_reta_query,
> > >  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
> > >  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> > > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> > >  };
> > >
> > >  /* store statistics names and its offset in stats structure */
> > > @@ -1101,6 +1103,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> > >  	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
> > >  	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> > >  	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> > > +	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >
> > >  	/*
> > >  	 * For secondary processes, we don't initialise any further as primary
> > > @@ -1580,6 +1583,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
> > >  	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
> > >  	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
> > >  	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> > > +	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >
> > >  	/* for secondary processes, we don't initialise any further as primary
> > >  	 * has already done this work. Only check we don't need a different
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > index fa572d184..122dae425 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > @@ -92,6 +92,8 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
> > >  	uint16_t nb_pkts);
> > >  #endif
> > >
> > > +static ixgbe_tx_done_cleanup_t ixgbe_tx_done_cleanup_op;
> >
> > You can't have just one static variable here.
> > There could be several ixgbe devices, and they could be configured in different ways.
> > I.e. tx_pkt_burst() is per device, so tx_done_cleanup() also has to be per device.
> > Probably the easiest way is to add a new entry for tx_done_cleanup into struct
> > ixgbe_txq_ops, and set it properly in ixgbe_set_tx_function().
> >
> > > +
> > >  /*********************************************************************
> > >  *
> > >  *  TX functions
> > > @@ -2306,6 +2308,152 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
> > >  	}
> > >  }
> > >
> > > +int
> > > +ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> >
> > As a nit I would change _scalar to _full or so.
> >
> > > +{
> > > +	uint32_t pkt_cnt;
> > > +	uint16_t i;
> > > +	uint16_t tx_last;
> > > +	uint16_t tx_id;
> > > +	uint16_t nb_tx_to_clean;
> > > +	uint16_t nb_tx_free_last;
> > > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> > > +
> > > +	/* Start free mbuf from the next of tx_tail */
> > > +	tx_last = txq->tx_tail;
> > > +	tx_id = swr_ring[tx_last].next_id;
> > > +
> > > +	if (txq->nb_tx_free == 0)
> > > +		if (ixgbe_xmit_cleanup(txq))
> >
> > As a nit it could be just: if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
> >
> > > +			return 0;
> > > +
> > > +	nb_tx_to_clean = txq->nb_tx_free;
> > > +	nb_tx_free_last = txq->nb_tx_free;
> > > +	if (!free_cnt)
> > > +		free_cnt = txq->nb_tx_desc;
> > > +
> > > +	/* Loop through swr_ring to count the amount of
> > > +	 * freeable mbufs and packets.
> > > +	 */
> > > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > > +		for (i = 0; i < nb_tx_to_clean &&
> > > +			pkt_cnt < free_cnt &&
> > > +			tx_id != tx_last; i++) {
> > > +			if (swr_ring[tx_id].mbuf != NULL) {
> > > +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> > > +				swr_ring[tx_id].mbuf = NULL;
> > > +
> > > +				/*
> > > +				 * last segment in the packet,
> > > +				 * increment packet count
> > > +				 */
> > > +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> > > +			}
> > > +
> > > +			tx_id = swr_ring[tx_id].next_id;
> > > +		}
> > > +
> > > +		if (tx_id == tx_last || txq->tx_rs_thresh
> > > +			> txq->nb_tx_desc - txq->nb_tx_free)
> >
> > The first condition (tx_id == tx_last) is probably redundant here.
> >
>
> I think it is necessary. The txq may transmit packets while the API is being called.

Nope, it is not possible. All ethdev RX/TX API is not thread safe.
It would be a race condition that would most likely cause either a crash or memory corruption.

> So txq->nb_tx_free may be changed.
>
> If (tx_id == tx_last), it will break the loop above, and the function should be done and return.
> However, if more than txq->tx_rs_thresh packets are sent into the txq while the function is running,
> it will not return, and will fall into an endless loop.
>
> > > +			break;
> > > +
> > > +		if (pkt_cnt < free_cnt) {
> > > +			if (ixgbe_xmit_cleanup(txq))
> > > +				break;
> > > +
> > > +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> > > +			nb_tx_free_last = txq->nb_tx_free;
> > > +		}
> > > +	}
> > > +
> > > +	PMD_TX_FREE_LOG(DEBUG,
> > > +		"Free %u Packets successfully "
> > > +		"(port=%d queue=%d)",
> > > +		pkt_cnt, txq->port_id, txq->queue_id);
> > > +
> > > +	return (int)pkt_cnt;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> > > +	uint32_t free_cnt __rte_unused)
> > > +{
> > > +	return -ENOTSUP;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> > > +{
> > > +	uint16_t i;
> > > +	uint16_t tx_first;
> > > +	uint16_t tx_id;
> > > +	uint32_t pkt_cnt;
> > > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> >
> > Looks overcomplicated here.
> > TX simple (and vec) doesn't support multi-seg packets, so one TXD - one mbuf,
> > and one packet.
> > And ixgbe_tx_free_bufs() always returns/frees either 0 or tx_rs_thresh
> > mbufs/packets.
> > So it probably can be something like that:
> >
> > ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> > {
> > 	if (free_cnt == 0)
> > 		free_cnt = txq->nb_tx_desc;
> >
> > 	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> > 	for (i = 0; i < cnt; i += n) {
> > 		n = ixgbe_tx_free_bufs(txq);
> > 		if (n == 0)
> > 			break;
> > 	}
> > 	return i;
> > }
> >
> > > +
> > > +	/* Start free mbuf from tx_first */
> > > +	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
> > > +	tx_id = tx_first;
> > > +
> > > +	/* while free_cnt is 0,
> > > +	 * suppose one mbuf per packet,
> > > +	 * try to free packets as many as possible
> > > +	 */
> > > +	if (free_cnt == 0)
> > > +		free_cnt = txq->nb_tx_desc;
> > > +
> > > +	/* Loop through swr_ring to count freeable packets */
> > > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > > +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> > > +			break;
> > > +
> > > +		if (!ixgbe_tx_free_bufs(txq))
> > > +			break;
> > > +
> > > +		for (i = 0; i != txq->tx_rs_thresh && tx_id != tx_first; i++) {
> > > +			/* last segment in the packet,
> > > +			 * increment packet count
> > > +			 */
> > > +			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
> > > +			tx_id = swr_ring[tx_id].next_id;
> > > +		}
> > > +
> > > +		if (tx_id == tx_first)
> > > +			break;
> > > +	}
> > > +
> > > +	PMD_TX_FREE_LOG(DEBUG,
> > > +		"Free %u packets successfully "
> > > +		"(port=%d queue=%d)",
> > > +		pkt_cnt, txq->port_id, txq->queue_id);
> > > +
> > > +	return (int)pkt_cnt;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
> > > +{
> > > +	ixgbe_tx_done_cleanup_t func = ixgbe_get_tx_done_cleanup_func();
> > > +
> > > +	if (!func)
> > > +		return -ENOTSUP;
> > > +
> > > +	return func(txq, free_cnt);
> > > +}
> > > +
> > > +void
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn)
> > > +{
> > > +	ixgbe_tx_done_cleanup_op = fn;
> > > +}
> > > +
> > > +ixgbe_tx_done_cleanup_t
> > > +ixgbe_get_tx_done_cleanup_func(void)
> > > +{
> > > +	return ixgbe_tx_done_cleanup_op;
> > > +}
> > > +
> > >  static void __attribute__((cold))
> > >  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
> > >  {
> > > @@ -2398,9 +2546,14 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
> > >  			ixgbe_txq_vec_setup(txq) == 0)) {
> > >  		PMD_INIT_LOG(DEBUG, "Vector tx enabled.");
> > >  		dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
> > > -	} else
> > > +		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_vec);
> > > +	} else {
> > >  #endif
> > >  	dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
> > > +	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_simple);
> > > +#ifdef RTE_IXGBE_INC_VECTOR
> > > +	}
> > > +#endif
> > >  	} else {
> > >  		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
> > >  		PMD_INIT_LOG(DEBUG,
> > > @@ -2412,6 +2565,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
> > >  				(unsigned long)RTE_PMD_IXGBE_TX_MAX_BURST);
> > >  		dev->tx_pkt_burst = ixgbe_xmit_pkts;
> > >  		dev->tx_pkt_prepare = ixgbe_prep_pkts;
> > > +		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >  	}
> > >  }
> > >
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > index 505d344b9..a52597aa9 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > @@ -253,6 +253,8 @@ struct ixgbe_txq_ops {
> > >  		IXGBE_ADVTXD_DCMD_DEXT |\
> > >  		IXGBE_ADVTXD_DCMD_EOP)
> > >
> > > +typedef int (*ixgbe_tx_done_cleanup_t)(struct ixgbe_tx_queue *txq,
> > > +	uint32_t free_cnt);
> > >
> > >  /* Takes an ethdev and a queue and sets up the tx function to be used based on
> > >   * the queue parameters. Used in tx_queue_setup by primary process and then
> > > @@ -285,6 +287,14 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
> > >  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
> > >  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> > >
> > > +void ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn);
> > > +ixgbe_tx_done_cleanup_t ixgbe_get_tx_done_cleanup_func(void);
> > > +
> > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +
> > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > >
> > > --
> > > 2.17.1
> > >