From: "Jiang, Cheng1"
To: Anoob Joseph, "thomas@monjalon.net", "Richardson, Bruce", "mb@smartsharesystems.com", "Xia, Chenbo", Amit Prakash Shukla, "huangdengdui@huawei.com", "Laatz, Kevin", "fengchengwen@huawei.com", Jerin Jacob Kollanukkaran
Cc: "dev@dpdk.org", "Hu, Jiayu", "Ding, Xuan", "Ma, WenwuX", "Wang, YuanX", "He, Xingguang", "Ling, WeiX"
Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
Date: Mon, 26 Jun 2023 10:02:23 +0000
References: <20230420072215.19069-1-cheng1.jiang@intel.com> <20230620065330.9999-1-cheng1.jiang@intel.com>

Hi Anoob,

Replies are inline.

Thanks,
Cheng

> -----Original Message-----
> From: Anoob Joseph
> Sent: Monday, June 26, 2023 1:42 PM
> To: Jiang, Cheng1; thomas@monjalon.net; Richardson, Bruce; mb@smartsharesystems.com;
> Xia, Chenbo; Amit Prakash Shukla; huangdengdui@huawei.com; Laatz, Kevin;
> fengchengwen@huawei.com; Jerin Jacob Kollanukkaran
> Cc: dev@dpdk.org; Hu, Jiayu; Ding, Xuan; Ma, WenwuX; Wang, YuanX; He, Xingguang; Ling, WeiX
> Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
>
> Hi Cheng,
>
> Please see inline.
>
> Thanks,
> Anoob
>
> > -----Original Message-----
> > From: Jiang, Cheng1
> > Sent: Saturday, June 24, 2023 5:23 PM
> > To: Anoob Joseph; thomas@monjalon.net; Richardson, Bruce; mb@smartsharesystems.com;
> > Xia, Chenbo; Amit Prakash Shukla; huangdengdui@huawei.com; Laatz, Kevin;
> > fengchengwen@huawei.com; Jerin Jacob Kollanukkaran
> > Cc: dev@dpdk.org; Hu, Jiayu; Ding, Xuan; Ma, WenwuX; Wang, YuanX; He, Xingguang; Ling, WeiX
> > Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
> >
> > Hi Anoob,
> >
> > Replies are inline.
> >
> > Thanks,
> > Cheng
> >
> > > -----Original Message-----
> > > From: Anoob Joseph
> > > Sent: Friday, June 23, 2023 2:53 PM
> > > To: Jiang, Cheng1; thomas@monjalon.net; Richardson, Bruce; mb@smartsharesystems.com;
> > > Xia, Chenbo; Amit Prakash Shukla; huangdengdui@huawei.com; Laatz, Kevin;
> > > fengchengwen@huawei.com; Jerin Jacob Kollanukkaran
> > > Cc: dev@dpdk.org; Hu, Jiayu; Ding, Xuan; Ma, WenwuX; Wang, YuanX; He, Xingguang; Ling, WeiX
> > > Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
> > >
> > > Hi Cheng,
> > >
> > > Thanks for the new version. Please see inline.
> > >
> > > Thanks,
> > > Anoob
> > >
> > > > -----Original Message-----
> > > > From: Cheng Jiang
> > > > Sent: Tuesday, June 20, 2023 12:24 PM
> > > > To: thomas@monjalon.net; bruce.richardson@intel.com; mb@smartsharesystems.com;
> > > > chenbo.xia@intel.com; Amit Prakash Shukla; Anoob Joseph; huangdengdui@huawei.com;
> > > > kevin.laatz@intel.com; fengchengwen@huawei.com; Jerin Jacob Kollanukkaran
> > > > Cc: dev@dpdk.org; jiayu.hu@intel.com; xuan.ding@intel.com; wenwux.ma@intel.com;
> > > > yuanx.wang@intel.com; xingguang.he@intel.com; weix.ling@intel.com; Cheng Jiang
> > > > Subject: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
> > > >
> > > > External Email
> > > >
> > > > ----------------------------------------------------------------------
> > > > There are many high-performance DMA devices supported in DPDK now, and
> > > > these DMA devices can also be integrated into other modules of DPDK as
> > > > accelerators, such as Vhost. Before integrating DMA into applications,
> > > > developers need to know the performance of these DMA devices in various
> > > > scenarios and the performance of CPUs in the same scenarios, such as with
> > > > different buffer lengths. Only in this way can we know the target
> > > > performance of the application accelerated by using them.
> > > > This patch introduces a high-performance testing tool, which supports
> > > > comparing the performance of CPU and DMA in different scenarios
> > > > automatically with a pre-set config file. Memory copy performance tests
> > > > are supported for now.
> > > >
> > > > Signed-off-by: Cheng Jiang
> > > > Signed-off-by: Jiayu Hu
> > > > Signed-off-by: Yuan Wang
> > > > Acked-by: Morten Brørup
> > > > Acked-by: Chenbo Xia
> > > > ---
> > > > v8:
> > > >   fixed string copy issue in parse_lcore();
> > > >   improved some data display format;
> > > >   added doc in doc/guides/tools;
> > > >   updated release notes;
> > > >
> > > > v7:
> > > >   fixed some strcpy issues;
> > > >   removed cache setup in calling rte_pktmbuf_pool_create();
> > > >   fixed some typos;
> > > >   added some memory free and null set operations;
> > > >   improved result calculation;
> > > > v6:
> > > >   improved code based on Anoob's comments;
> > > >   fixed some code structure issues;
> > > > v5:
> > > >   fixed some LONG_LINE warnings;
> > > > v4:
> > > >   fixed inaccuracy of the memory footprint display;
> > > > v3:
> > > >   fixed some typos;
> > > > v2:
> > > >   added lcore/dmadev designation;
> > > >   added error case process;
> > > >   removed worker_threads parameter from config.ini;
> > > >   improved the logs;
> > > >   improved config file;
> > > >
> > > >  app/meson.build                        |   1 +
> > > >  app/test-dma-perf/benchmark.c          | 498 +++++++++++++++++++++
> > > >  app/test-dma-perf/config.ini           |  61 +++
> > > >  app/test-dma-perf/main.c               | 594 +++++++++++++++++++++++++
> > > >  app/test-dma-perf/main.h               |  69 +++
> > > >  app/test-dma-perf/meson.build          |  17 +
> > > >  doc/guides/rel_notes/release_23_07.rst |   6 +
> > > >  doc/guides/tools/dmaperf.rst           | 103 +++++
> > > >  doc/guides/tools/index.rst             |   1 +
> > > >  9 files changed, 1350 insertions(+)
> > > >  create mode 100644 app/test-dma-perf/benchmark.c
> > > >  create mode 100644 app/test-dma-perf/config.ini
> > > >  create mode 100644 app/test-dma-perf/main.c
> > > >  create mode 100644 app/test-dma-perf/main.h
> > > >  create mode 100644 app/test-dma-perf/meson.build
> > > >  create mode 100644 doc/guides/tools/dmaperf.rst
> > > >
> >
>
> > > > +			fprintf(stderr, "Error: Fail to find DMA %s.\n", dma_name);
> > > > +			goto end;
> > > > +		}
> > > > +
> > > > +		ldm->dma_ids[i] = dev_id;
> > > > +		configure_dmadev_queue(dev_id, ring_size);
> > > > +		++nb_dmadevs;
> > > > +	}
> > > > +
> > > > +end:
> > > > +	if (nb_dmadevs < nb_workers) {
> > > > +		printf("Not enough dmadevs (%u) for all workers (%u).\n",
> > > > +			nb_dmadevs, nb_workers);
> > > > +		return -1;
> > > > +	}
> > > > +
> > > > +	printf("Number of used dmadevs: %u.\n", nb_dmadevs);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static inline void
> > > > +do_dma_submit_and_poll(uint16_t dev_id, uint64_t *async_cnt,
> > > > +			volatile struct worker_info *worker_info)
> > > > +{
> > > > +	int ret;
> > > > +	uint16_t nr_cpl;
> > > > +
> > > > +	ret = rte_dma_submit(dev_id, 0);
> > > > +	if (ret < 0) {
> > > > +		rte_dma_stop(dev_id);
> > > > +		rte_dma_close(dev_id);
> > > > +		rte_exit(EXIT_FAILURE, "Error with dma submit.\n");
> > > > +	}
> > > > +
> > > > +	nr_cpl = rte_dma_completed(dev_id, 0, MAX_DMA_CPL_NB, NULL, NULL);
> > > > +	*async_cnt -= nr_cpl;
> > > > +	worker_info->total_cpl += nr_cpl;
> > > > +}
> > > > +
> > > > +static inline int
> > > > +do_dma_mem_copy(void *p)
> > >
> > > [Anoob] Just curious, why not pass struct lcore_params *para itself?
> > > Is it because the pointer is volatile?
> > > If yes, then we can take an AI to split the struct into volatile and
> > > non-volatile parts.
> >
> > [Cheng] The reason I did it this way is because I want to launch this
> > function on another core by spawning a new thread, and
> > rte_eal_remote_launch() takes a void * as the parameter. That's why I
> > passed void *p. Your suggestion to split the struct into volatile and
> > non-volatile parts is quite reasonable. I am thinking about the best
> > way to implement it. Thanks.
>
> [Anoob] Instead of passing the address of the index variable as void *,
> you can easily send the lcore_params pointer, right?
>
[Cheng] Yes, you are right. I can pass the lcore_params pointer, and I'll fix it in the next version. The new lcore_params will be non-volatile, with a volatile worker_info in it. This is more reasonable.

> >
> > >
> > > > +{
> > > > +	const uint16_t *para_idx = (uint16_t *)p;
> > > > +	volatile struct lcore_params *para = lcores_p[*para_idx].v_ptr;
> > > > +	volatile struct worker_info *worker_info = &(para->worker_info);
> > > > +	const uint16_t dev_id = para->dev_id;
> > > > +	const uint32_t nr_buf = para->nr_buf;
> > > > +	const uint16_t kick_batch = para->kick_batch;
> > > > +	const uint32_t buf_size = para->buf_size;
> > > > +	struct rte_mbuf **srcs = para->srcs;
> > > > +	struct rte_mbuf **dsts = para->dsts;
> > > > +	uint16_t nr_cpl;
> > > > +	uint64_t async_cnt = 0;
> > > > +	uint32_t i;
> > > > +	uint32_t poll_cnt = 0;
> > > > +	int ret;
> > > > +
> > > > +	worker_info->stop_flag = false;
> > > > +	worker_info->ready_flag = true;
> > > > +
> > > > +	while (!worker_info->start_flag)
> > > > +		;
> > > > +
> > > > +	while (1) {
> > > > +		for (i = 0; i < nr_buf; i++) {
> > > > +dma_copy:
> > > > +			ret = rte_dma_copy(dev_id, 0, rte_pktmbuf_iova(srcs[i]),
> > > > +				rte_pktmbuf_iova(dsts[i]), buf_size, 0);
> > >
> > > [Anoob] Do we need to use 'rte_mbuf_data_iova' here instead of
> > > 'rte_pktmbuf_iova'?
> >
> > [Cheng] Yes, rte_mbuf_data_iova is more appropriate. I'll fix it in the
> > next version. Thanks.
> >
> > >
> > > > +			if (unlikely(ret < 0)) {
> > > > +				if (ret == -ENOSPC) {
> > > > +					do_dma_submit_and_poll(dev_id, &async_cnt, worker_info);
> > > > +					goto dma_copy;
> > > > +				} else {
> > > > +					/* Error exit */
> > > > +					rte_dma_stop(dev_id);
> > >
> > > [Anoob] Missing rte_dma_close() here. Also, maybe introduce a static
> > > void function so that the rte_exit call etc. won't be part of the
> > > fastpath loop.
> > >
> > > Maybe something like below, and you can call it from here and from
> > > "do_dma_submit_and_poll".
> > >
> > > static void
> > > error_exit(int dev_id)
> > > {
> > > 	/* Error exit */
> > > 	rte_dma_stop(dev_id);
> > > 	rte_dma_close(dev_id);
> > > 	rte_exit(EXIT_FAILURE, "DMA enqueue failed\n");
> > > }
> > >
> > [Cheng] I'm not so sure here. rte_dma_close() is called in rte_exit().
> > Do we still call it explicitly before rte_exit()?
>
> [Anoob] In 'do_dma_submit_and_poll', there is rte_dma_close() before
> rte_exit(). I'm fine either way as long as it is consistent. That said,
> I think it is better to call close() from the app, rather than relying
> on rte_exit.
>
[Cheng] Sure, it makes sense to me that the app should call rte_dma_close(). I'll fix it in the next version. Thanks.
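
For reference, below is a rough, untested sketch of the direction agreed above for the next version: a non-volatile lcore_params that embeds a volatile worker_info and is passed directly to rte_eal_remote_launch(), with the enqueue switched to rte_mbuf_data_iova() and the error path routed through the error_exit() helper suggested earlier. Field types are only inferred from the snippets quoted above, and the submit/poll batching of the real loop is omitted for brevity, so treat this as an illustration rather than the actual v9 code.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#include <rte_branch_prediction.h>
#include <rte_debug.h>
#include <rte_dmadev.h>
#include <rte_mbuf.h>

/* Flags polled by the main core; accessed through a volatile pointer below. */
struct worker_info {
	bool ready_flag;
	bool start_flag;
	bool stop_flag;
	uint64_t total_cpl;
};

/* Non-volatile parameter block handed to the worker as-is. */
struct lcore_params {
	uint16_t dev_id;
	uint32_t nr_buf;
	uint16_t kick_batch;
	uint32_t buf_size;
	struct rte_mbuf **srcs;
	struct rte_mbuf **dsts;
	volatile struct worker_info worker_info;
};

/* Shared cleanup helper (as suggested above), kept out of the fastpath. */
static void
error_exit(int dev_id)
{
	rte_dma_stop(dev_id);
	rte_dma_close(dev_id);
	rte_exit(EXIT_FAILURE, "DMA enqueue failed\n");
}

static int
do_dma_mem_copy(void *p)
{
	struct lcore_params *para = p;
	volatile struct worker_info *worker_info = &para->worker_info;
	uint32_t i;
	int ret;

	worker_info->stop_flag = false;
	worker_info->ready_flag = true;

	while (!worker_info->start_flag)
		;

	for (i = 0; i < para->nr_buf; i++) {
		/* Enqueue using the data IOVA of the mbuf, as agreed above. */
		ret = rte_dma_copy(para->dev_id, 0,
				rte_mbuf_data_iova(para->srcs[i]),
				rte_mbuf_data_iova(para->dsts[i]),
				para->buf_size, 0);
		if (unlikely(ret < 0)) {
			/*
			 * The real loop retries on -ENOSPC after draining
			 * completions (do_dma_submit_and_poll()); any other
			 * error goes through the shared cleanup helper.
			 */
			if (ret != -ENOSPC)
				error_exit(para->dev_id);
		}
	}

	return 0;
}

/*
 * The main core can then launch the worker with the params pointer itself,
 * instead of the address of an index variable:
 *
 *	rte_eal_remote_launch(do_dma_mem_copy, &lcore_params[i], lcore_id);
 */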