From: "Song, Keesang" <Keesang.Song@amd.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, Thomas Monjalon
 <thomas@monjalon.net>, Aman Kumar <aman.kumar@vvdntech.in>
CC: "dev@dpdk.org" <dev@dpdk.org>, "rasland@nvidia.com" <rasland@nvidia.com>, 
 "asafp@nvidia.com" <asafp@nvidia.com>, "shys@nvidia.com" <shys@nvidia.com>,
 "viacheslavo@nvidia.com" <viacheslavo@nvidia.com>, "akozyrev@nvidia.com"
 <akozyrev@nvidia.com>, "matan@nvidia.com" <matan@nvidia.com>, "Burakov,
 Anatoly" <anatoly.burakov@intel.com>, "aman.kumar@vvdntech.in"
 <aman.kumar@vvdntech.in>, "jerinjacobk@gmail.com" <jerinjacobk@gmail.com>,
 "Richardson, Bruce" <bruce.richardson@intel.com>, "david.marchand@redhat.com"
 <david.marchand@redhat.com>
Date: Thu, 21 Oct 2021 18:12:26 +0000
Message-ID: <BY5PR12MB36810A4EB4091460F3868FF996BF9@BY5PR12MB3681.namprd12.prod.outlook.com>
References: <20210823084411.29592-1-aman.kumar@vvdntech.in>
 <20211019104724.19416-1-aman.kumar@vvdntech.in> <1936725.1xRPv8SPVf@thomas>
 <BY5PR12MB36811F49FBBF85FBEAE0AE6296BF9@BY5PR12MB3681.namprd12.prod.outlook.com>
 <DM6PR11MB4491E77C9BB6BDD8343924D79ABF9@DM6PR11MB4491.namprd11.prod.outlook.com>
In-Reply-To: <DM6PR11MB4491E77C9BB6BDD8343924D79ABF9@DM6PR11MB4491.namprd11.prod.outlook.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine
 to eal

[AMD Official Use Only]

Hi Ananyev,

The current memcpy implementation in glibc is written in assembly.
Although memcpy could have been implemented with intrinsics, our AMD library
developers already work on the glibc functions, so they have provided a
tailored implementation based on inline assembly.
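For reference, the kind of routine under discussion can also be sketched with AVX2 intrinsics. The following is a minimal, hypothetical example (the function name and tail handling are illustrative, not the AMD implementation) of a non-temporal-load / temporal-store copy loop:

```c
/* Hypothetical sketch (not the patch under review): a non-temporal-load /
 * temporal-store copy written with AVX2 intrinsics instead of inline asm.
 * Assumes the source pointer is 32-byte aligned, as vmovntdqa requires. */
#include <immintrin.h>
#include <stddef.h>
#include <string.h>

__attribute__((target("avx2")))
static void *
ntload_tstore_copy(void *dst, const void *src, size_t size)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	while (size >= 128) {
		/* vmovntdqa: non-temporal loads from the aligned source. */
		__m256i y0 = _mm256_stream_load_si256((const __m256i *)(s + 0));
		__m256i y1 = _mm256_stream_load_si256((const __m256i *)(s + 32));
		__m256i y2 = _mm256_stream_load_si256((const __m256i *)(s + 64));
		__m256i y3 = _mm256_stream_load_si256((const __m256i *)(s + 96));
		/* vmovdqu: regular (temporal) unaligned stores. */
		_mm256_storeu_si256((__m256i *)(d + 0), y0);
		_mm256_storeu_si256((__m256i *)(d + 32), y1);
		_mm256_storeu_si256((__m256i *)(d + 64), y2);
		_mm256_storeu_si256((__m256i *)(d + 96), y3);
		s += 128;
		d += 128;
		size -= 128;
	}
	if (size > 0)
		memcpy(d, s, size); /* plain copy for the sub-128B tail */
	return dst;
}
```

The asm version instead dispatches the tail through its own 64/32/16/8/4/2/1-byte paths; the sketch just defers to memcpy to stay short.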

Thanks for your support,
Keesang

-----Original Message-----
From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
Sent: Thursday, October 21, 2021 10:40 AM
To: Song, Keesang <Keesang.Song@amd.com>; Thomas Monjalon <thomas@monjalon.net>; Aman Kumar <aman.kumar@vvdntech.in>
Cc: dev@dpdk.org; rasland@nvidia.com; asafp@nvidia.com; shys@nvidia.com; viacheslavo@nvidia.com; akozyrev@nvidia.com; matan@nvidia.com; Burakov, Anatoly <anatoly.burakov@intel.com>; aman.kumar@vvdntech.in; jerinjacobk@gmail.com; Richardson, Bruce <bruce.richardson@intel.com>; david.marchand@redhat.com
Subject: RE: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal


>
> Hi Thomas,
>
> I hope this explanation answers your question.
> We (the AMD Linux library support team) have implemented a custom-tailored
> memcpy solution that closely matches DPDK use-case requirements such as:
> 1)      A minimum 64B data packet length with a cache-aligned source and destination.
> 2)      Non-temporal loads and temporal stores for cache-aligned sources on
> both the RX and TX paths. We could not implement the non-temporal store for
> the TX path, as non-temporal loads/stores work only with 32B-aligned
> addresses on AVX2.
> 3)      This solution works on all AVX2-capable AMD machines.
>
> Internally we have completed the integrity testing and benchmarking of
> the solution and found gains of 8.4% to 14.5%, specifically on Milan
> CPUs (3rd-generation EPYC processors).

It is still not clear to me why it has to be written in assembler.
Why can't similar code be written in C with intrinsics, as the rest of
rte_memcpy.h does?
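For comparison, the overlapping "first vector plus last vector" trick that the asm below uses for mid-sized copies is straightforward to express with intrinsics. An illustrative sketch (names and size range are mine, not DPDK's):

```c
/* Illustrative only: the "load the head and the last 32 bytes" pattern for
 * 32..64-byte copies, written with AVX intrinsics rather than inline asm.
 * The two stores may overlap, which is harmless for a plain forward copy. */
#include <immintrin.h>
#include <stddef.h>

__attribute__((target("avx")))
static void *
copy_32_to_64(void *dst, const void *src, size_t n)
{
	const unsigned char *s = src;
	unsigned char *d = dst;

	/* Unaligned 32B load of the head and of the final 32 bytes. */
	__m256i head = _mm256_loadu_si256((const __m256i *)s);
	__m256i tail = _mm256_loadu_si256((const __m256i *)(s + n - 32));
	_mm256_storeu_si256((__m256i *)d, head);
	_mm256_storeu_si256((__m256i *)(d + n - 32), tail);
	return dst;
}
```

For n == 32 both load/store pairs touch the same bytes, so the branch-free range 32..64 is covered by just two loads and two stores.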

>
> Thanks for your support,
> Keesang
>
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 19, 2021 5:31 AM
> To: Aman Kumar <aman.kumar@vvdntech.in>
> Cc: dev@dpdk.org; rasland@nvidia.com; asafp@nvidia.com;
> shys@nvidia.com; viacheslavo@nvidia.com; akozyrev@nvidia.com;
> matan@nvidia.com; anatoly.burakov@intel.com; Song, Keesang
> <Keesang.Song@amd.com>; aman.kumar@vvdntech.in; jerinjacobk@gmail.com;
> bruce.richardson@intel.com; konstantin.ananyev@intel.com;
> david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy
> routine to eal
>
> 19/10/2021 12:47, Aman Kumar:
> > This patch provides rte_memcpy* calls optimized for AMD EPYC
> > platforms. Use config/x86/x86_amd_epyc_linux_gcc as cross-file with
> > meson to build dpdk for AMD EPYC platforms.
>
> Please split in 2 patches: platform & memcpy.
>
> What optimization is specific to EPYC?
>
> I dislike the asm code below.
> What is AMD specific inside?
> Can it use compiler intrinsics as it is done elsewhere?
>
> > +static __rte_always_inline void *
> > +rte_memcpy_aligned_ntload_tstore16_amdepyc2(void *dst,
> > +                                         const void *src,
> > +                                         size_t size) {
> > +     asm volatile goto("movq %0, %%rsi\n\t"
> > +     "movq %1, %%rdi\n\t"
> > +     "movq %2, %%rdx\n\t"
> > +     "cmpq   $(128), %%rdx\n\t"
> > +     "jb     202f\n\t"
> > +     "201:\n\t"
> > +     "vmovntdqa (%%rsi), %%ymm0\n\t"
> > +     "vmovntdqa 32(%%rsi), %%ymm1\n\t"
> > +     "vmovntdqa 64(%%rsi), %%ymm2\n\t"
> > +     "vmovntdqa 96(%%rsi), %%ymm3\n\t"
> > +     "vmovdqu  %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu  %%ymm1, 32(%%rdi)\n\t"
> > +     "vmovdqu  %%ymm2, 64(%%rdi)\n\t"
> > +     "vmovdqu  %%ymm3, 96(%%rdi)\n\t"
> > +     "addq   $128, %%rsi\n\t"
> > +     "addq   $128, %%rdi\n\t"
> > +     "subq   $128, %%rdx\n\t"
> > +     "jz     %l[done]\n\t"
> > +     "cmpq   $128, %%rdx\n\t" /*Vector Size 32B.  */
> > +     "jae    201b\n\t"
> > +     "202:\n\t"
> > +     "cmpq   $64, %%rdx\n\t"
> > +     "jb     203f\n\t"
> > +     "vmovntdqa (%%rsi), %%ymm0\n\t"
> > +     "vmovntdqa 32(%%rsi), %%ymm1\n\t"
> > +     "vmovdqu  %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu  %%ymm1, 32(%%rdi)\n\t"
> > +     "addq   $64, %%rsi\n\t"
> > +     "addq   $64, %%rdi\n\t"
> > +     "subq   $64, %%rdx\n\t"
> > +     "jz     %l[done]\n\t"
> > +     "203:\n\t"
> > +     "cmpq   $32, %%rdx\n\t"
> > +     "jb     204f\n\t"
> > +     "vmovntdqa (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu  %%ymm0, (%%rdi)\n\t"
> > +     "addq   $32, %%rsi\n\t"
> > +     "addq   $32, %%rdi\n\t"
> > +     "subq   $32, %%rdx\n\t"
> > +     "jz     %l[done]\n\t"
> > +     "204:\n\t"
> > +     "cmpb   $16, %%dl\n\t"
> > +     "jb     205f\n\t"
> > +     "vmovntdqa (%%rsi), %%xmm0\n\t"
> > +     "vmovdqu  %%xmm0, (%%rdi)\n\t"
> > +     "addq   $16, %%rsi\n\t"
> > +     "addq   $16, %%rdi\n\t"
> > +     "subq   $16, %%rdx\n\t"
> > +     "jz     %l[done]\n\t"
> > +     "205:\n\t"
> > +     "cmpb   $2, %%dl\n\t"
> > +     "jb     208f\n\t"
> > +     "cmpb   $4, %%dl\n\t"
> > +     "jbe    207f\n\t"
> > +     "cmpb   $8, %%dl\n\t"
> > +     "jbe    206f\n\t"
> > +     "movq   -8(%%rsi,%%rdx), %%rcx\n\t"
> > +     "movq   (%%rsi), %%rsi\n\t"
> > +     "movq   %%rcx, -8(%%rdi,%%rdx)\n\t"
> > +     "movq   %%rsi, (%%rdi)\n\t"
> > +     "jmp    %l[done]\n\t"
> > +     "206:\n\t"
> > +     "movl   -4(%%rsi,%%rdx), %%ecx\n\t"
> > +     "movl   (%%rsi), %%esi\n\t"
> > +     "movl   %%ecx, -4(%%rdi,%%rdx)\n\t"
> > +     "movl   %%esi, (%%rdi)\n\t"
> > +     "jmp    %l[done]\n\t"
> > +     "207:\n\t"
> > +     "movzwl -2(%%rsi,%%rdx), %%ecx\n\t"
> > +     "movzwl (%%rsi), %%esi\n\t"
> > +     "movw   %%cx, -2(%%rdi,%%rdx)\n\t"
> > +     "movw   %%si, (%%rdi)\n\t"
> > +     "jmp    %l[done]\n\t"
> > +     "208:\n\t"
> > +     "movzbl (%%rsi), %%ecx\n\t"
> > +     "movb   %%cl, (%%rdi)"
> > +     :
> > +     : "r"(src), "r"(dst), "r"(size)
> > +     : "rcx", "rdx", "rsi", "rdi", "ymm0", "ymm1", "ymm2", "ymm3", "memory"
> > +     : done
> > +     );
> > +done:
> > +     return dst;
> > +}
> > +
> > +static __rte_always_inline void *
> > +rte_memcpy_generic(void *dst, const void *src, size_t len) {
> > +     asm goto("movq  %0, %%rsi\n\t"
> > +     "movq   %1, %%rdi\n\t"
> > +     "movq   %2, %%rdx\n\t"
> > +     "movq    %%rdi, %%rax\n\t"
> > +     "cmp     $32, %%rdx\n\t"
> > +     "jb      101f\n\t"
> > +     "cmp     $(32 * 2), %%rdx\n\t"
> > +     "ja      108f\n\t"
> > +     "vmovdqu   (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu   -32(%%rsi,%%rdx), %%ymm1\n\t"
> > +     "vmovdqu   %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu   %%ymm1, -32(%%rdi,%%rdx)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "101:\n\t"
> > +     /* Less than 1 VEC.  */
> > +     "cmpb    $32, %%dl\n\t"
> > +     "jae     103f\n\t"
> > +     "cmpb    $16, %%dl\n\t"
> > +     "jae     104f\n\t"
> > +     "cmpb    $8, %%dl\n\t"
> > +     "jae     105f\n\t"
> > +     "cmpb    $4, %%dl\n\t"
> > +     "jae     106f\n\t"
> > +     "cmpb    $1, %%dl\n\t"
> > +     "ja      107f\n\t"
> > +     "jb      102f\n\t"
> > +     "movzbl  (%%rsi), %%ecx\n\t"
> > +     "movb    %%cl, (%%rdi)\n\t"
> > +     "102:\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "103:\n\t"
> > +     /* From 32 to 63.  No branch when size == 32.  */
> > +     "vmovdqu (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t"
> > +     "vmovdqu %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +     /* From 16 to 31.  No branch when size == 16.  */
> > +     "104:\n\t"
> > +     "vmovdqu (%%rsi), %%xmm0\n\t"
> > +     "vmovdqu -16(%%rsi,%%rdx), %%xmm1\n\t"
> > +     "vmovdqu %%xmm0, (%%rdi)\n\t"
> > +     "vmovdqu %%xmm1, -16(%%rdi,%%rdx)\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "105:\n\t"
> > +     /* From 8 to 15.  No branch when size == 8.  */
> > +     "movq    -8(%%rsi,%%rdx), %%rcx\n\t"
> > +     "movq    (%%rsi), %%rsi\n\t"
> > +     "movq    %%rcx, -8(%%rdi,%%rdx)\n\t"
> > +     "movq    %%rsi, (%%rdi)\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "106:\n\t"
> > +     /* From 4 to 7.  No branch when size == 4.  */
> > +     "movl    -4(%%rsi,%%rdx), %%ecx\n\t"
> > +     "movl    (%%rsi), %%esi\n\t"
> > +     "movl    %%ecx, -4(%%rdi,%%rdx)\n\t"
> > +     "movl    %%esi, (%%rdi)\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "107:\n\t"
> > +     /* From 2 to 3.  No branch when size == 2.  */
> > +     "movzwl  -2(%%rsi,%%rdx), %%ecx\n\t"
> > +     "movzwl  (%%rsi), %%esi\n\t"
> > +     "movw    %%cx, -2(%%rdi,%%rdx)\n\t"
> > +     "movw    %%si, (%%rdi)\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "108:\n\t"
> > +     /* More than 2 * VEC and there may be overlap between destination */
> > +     /* and source.  */
> > +     "cmpq    $(32 * 8), %%rdx\n\t"
> > +     "ja      111f\n\t"
> > +     "cmpq    $(32 * 4), %%rdx\n\t"
> > +     "jb      109f\n\t"
> > +     /* Copy from 4 * VEC to 8 * VEC, inclusively. */
> > +     "vmovdqu   (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu   32(%%rsi), %%ymm1\n\t"
> > +     "vmovdqu   (32 * 2)(%%rsi), %%ymm2\n\t"
> > +     "vmovdqu   (32 * 3)(%%rsi), %%ymm3\n\t"
> > +     "vmovdqu   -32(%%rsi,%%rdx), %%ymm4\n\t"
> > +     "vmovdqu   -(32 * 2)(%%rsi,%%rdx), %%ymm5\n\t"
> > +     "vmovdqu   -(32 * 3)(%%rsi,%%rdx), %%ymm6\n\t"
> > +     "vmovdqu   -(32 * 4)(%%rsi,%%rdx), %%ymm7\n\t"
> > +     "vmovdqu   %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu   %%ymm1, 32(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm4, -32(%%rdi,%%rdx)\n\t"
> > +     "vmovdqu   %%ymm5, -(32 * 2)(%%rdi,%%rdx)\n\t"
> > +     "vmovdqu   %%ymm6, -(32 * 3)(%%rdi,%%rdx)\n\t"
> > +     "vmovdqu   %%ymm7, -(32 * 4)(%%rdi,%%rdx)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "109:\n\t"
> > +     /* Copy from 2 * VEC to 4 * VEC. */
> > +     "vmovdqu   (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu   32(%%rsi), %%ymm1\n\t"
> > +     "vmovdqu   -32(%%rsi,%%rdx), %%ymm2\n\t"
> > +     "vmovdqu   -(32 * 2)(%%rsi,%%rdx), %%ymm3\n\t"
> > +     "vmovdqu   %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqu   %%ymm1, 32(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm2, -32(%%rdi,%%rdx)\n\t"
> > +     "vmovdqu   %%ymm3, -(32 * 2)(%%rdi,%%rdx)\n\t"
> > +     "vzeroupper\n\t"
> > +     "110:\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "111:\n\t"
> > +     "cmpq    %%rsi, %%rdi\n\t"
> > +     "ja      113f\n\t"
> > +     /* Source == destination is less common.  */
> > +     "je      110b\n\t"
> > +     /* Load the first VEC and last 4 * VEC to
> > +      * support overlapping addresses.
> > +      */
> > +     "vmovdqu   (%%rsi), %%ymm4\n\t"
> > +     "vmovdqu   -32(%%rsi, %%rdx), %%ymm5\n\t"
> > +     "vmovdqu   -(32 * 2)(%%rsi, %%rdx), %%ymm6\n\t"
> > +     "vmovdqu   -(32 * 3)(%%rsi, %%rdx), %%ymm7\n\t"
> > +     "vmovdqu   -(32 * 4)(%%rsi, %%rdx), %%ymm8\n\t"
> > +     /* Save start and stop of the destination buffer.  */
> > +     "movq    %%rdi, %%r11\n\t"
> > +     "leaq    -32(%%rdi, %%rdx), %%rcx\n\t"
> > +     /* Align destination for aligned stores in the loop.  Compute */
> > +     /* how much destination is misaligned.  */
> > +     "movq    %%rdi, %%r8\n\t"
> > +     "andq    $(32 - 1), %%r8\n\t"
> > +     /* Get the negative of offset for alignment.  */
> > +     "subq    $32, %%r8\n\t"
> > +     /* Adjust source.  */
> > +     "subq    %%r8, %%rsi\n\t"
> > +     /* Adjust destination which should be aligned now.  */
> > +     "subq    %%r8, %%rdi\n\t"
> > +     /* Adjust length.  */
> > +     "addq    %%r8, %%rdx\n\t"
> > +     /* Check non-temporal store threshold.  */
> > +     "cmpq    $(1024*1024), %%rdx\n\t"
> > +     "ja      115f\n\t"
> > +     "112:\n\t"
> > +     /* Copy 4 * VEC a time forward.  */
> > +     "vmovdqu   (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu   32(%%rsi), %%ymm1\n\t"
> > +     "vmovdqu   (32 * 2)(%%rsi), %%ymm2\n\t"
> > +     "vmovdqu   (32 * 3)(%%rsi), %%ymm3\n\t"
> > +     "addq    $(32 * 4), %%rsi\n\t"
> > +     "subq    $(32 * 4), %%rdx\n\t"
> > +     "vmovdqa   %%ymm0, (%%rdi)\n\t"
> > +     "vmovdqa   %%ymm1, 32(%%rdi)\n\t"
> > +     "vmovdqa   %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +     "vmovdqa   %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +     "addq    $(32 * 4), %%rdi\n\t"
> > +     "cmpq    $(32 * 4), %%rdx\n\t"
> > +     "ja      112b\n\t"
> > +     /* Store the last 4 * VEC.  */
> > +     "vmovdqu   %%ymm5, (%%rcx)\n\t"
> > +     "vmovdqu   %%ymm6, -32(%%rcx)\n\t"
> > +     "vmovdqu   %%ymm7, -(32 * 2)(%%rcx)\n\t"
> > +     "vmovdqu   %%ymm8, -(32 * 3)(%%rcx)\n\t"
> > +     /* Store the first VEC.  */
> > +     "vmovdqu   %%ymm4, (%%r11)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "113:\n\t"
> > +     /* Load the first 4*VEC and last VEC to support overlapping addresses. */
> > +     "vmovdqu   (%%rsi), %%ymm4\n\t"
> > +     "vmovdqu   32(%%rsi), %%ymm5\n\t"
> > +     "vmovdqu   (32 * 2)(%%rsi), %%ymm6\n\t"
> > +     "vmovdqu   (32 * 3)(%%rsi), %%ymm7\n\t"
> > +     "vmovdqu   -32(%%rsi,%%rdx), %%ymm8\n\t"
> > +     /* Save stop of the destination buffer.  */
> > +     "leaq    -32(%%rdi, %%rdx), %%r11\n\t"
> > +     /* Align destination end for aligned stores in the loop.  Compute */
> > +     /* how much destination end is misaligned.  */
> > +     "leaq    -32(%%rsi, %%rdx), %%rcx\n\t"
> > +     "movq    %%r11, %%r9\n\t"
> > +     "movq    %%r11, %%r8\n\t"
> > +     "andq    $(32 - 1), %%r8\n\t"
> > +     /* Adjust source.  */
> > +     "subq    %%r8, %%rcx\n\t"
> > +     /* Adjust the end of destination which should be aligned now.  */
> > +     "subq    %%r8, %%r9\n\t"
> > +     /* Adjust length.  */
> > +     "subq    %%r8, %%rdx\n\t"
> > +      /* Check non-temporal store threshold.  */
> > +     "cmpq    $(1024*1024), %%rdx\n\t"
> > +     "ja      117f\n\t"
> > +     "114:\n\t"
> > +     /* Copy 4 * VEC a time backward.  */
> > +     "vmovdqu   (%%rcx), %%ymm0\n\t"
> > +     "vmovdqu   -32(%%rcx), %%ymm1\n\t"
> > +     "vmovdqu   -(32 * 2)(%%rcx), %%ymm2\n\t"
> > +     "vmovdqu   -(32 * 3)(%%rcx), %%ymm3\n\t"
> > +     "subq    $(32 * 4), %%rcx\n\t"
> > +     "subq    $(32 * 4), %%rdx\n\t"
> > +     "vmovdqa   %%ymm0, (%%r9)\n\t"
> > +     "vmovdqa   %%ymm1, -32(%%r9)\n\t"
> > +     "vmovdqa   %%ymm2, -(32 * 2)(%%r9)\n\t"
> > +     "vmovdqa   %%ymm3, -(32 * 3)(%%r9)\n\t"
> > +     "subq    $(32 * 4), %%r9\n\t"
> > +     "cmpq    $(32 * 4), %%rdx\n\t"
> > +     "ja      114b\n\t"
> > +     /* Store the first 4 * VEC. */
> > +     "vmovdqu   %%ymm4, (%%rdi)\n\t"
> > +     "vmovdqu   %%ymm5, 32(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm6, (32 * 2)(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm7, (32 * 3)(%%rdi)\n\t"
> > +     /* Store the last VEC. */
> > +     "vmovdqu   %%ymm8, (%%r11)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +
> > +     "115:\n\t"
> > +     /* Don't use non-temporal store if there is overlap between */
> > +     /* destination and source since destination may be in cache */
> > +     /* when source is loaded. */
> > +     "leaq    (%%rdi, %%rdx), %%r10\n\t"
> > +     "cmpq    %%r10, %%rsi\n\t"
> > +     "jb      112b\n\t"
> > +     "116:\n\t"
> > +     /* Copy 4 * VEC a time forward with non-temporal stores.  */
> > +     "prefetcht0 (32*4*2)(%%rsi)\n\t"
> > +     "prefetcht0 (32*4*2 + 64)(%%rsi)\n\t"
> > +     "prefetcht0 (32*4*3)(%%rsi)\n\t"
> > +     "prefetcht0 (32*4*3 + 64)(%%rsi)\n\t"
> > +     "vmovdqu   (%%rsi), %%ymm0\n\t"
> > +     "vmovdqu   32(%%rsi), %%ymm1\n\t"
> > +     "vmovdqu   (32 * 2)(%%rsi), %%ymm2\n\t"
> > +     "vmovdqu   (32 * 3)(%%rsi), %%ymm3\n\t"
> > +     "addq    $(32*4), %%rsi\n\t"
> > +     "subq    $(32*4), %%rdx\n\t"
> > +     "vmovntdq  %%ymm0, (%%rdi)\n\t"
> > +     "vmovntdq  %%ymm1, 32(%%rdi)\n\t"
> > +     "vmovntdq  %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +     "vmovntdq  %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +     "addq    $(32*4), %%rdi\n\t"
> > +     "cmpq    $(32*4), %%rdx\n\t"
> > +     "ja      116b\n\t"
> > +     "sfence\n\t"
> > +     /* Store the last 4 * VEC.  */
> > +     "vmovdqu   %%ymm5, (%%rcx)\n\t"
> > +     "vmovdqu   %%ymm6, -32(%%rcx)\n\t"
> > +     "vmovdqu   %%ymm7, -(32 * 2)(%%rcx)\n\t"
> > +     "vmovdqu   %%ymm8, -(32 * 3)(%%rcx)\n\t"
> > +     /* Store the first VEC.  */
> > +     "vmovdqu   %%ymm4, (%%r11)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]\n\t"
> > +     "117:\n\t"
> > +     /* Don't use non-temporal store if there is overlap between */
> > +     /* destination and source since destination may be in cache */
> > +     /* when source is loaded.  */
> > +     "leaq    (%%rcx, %%rdx), %%r10\n\t"
> > +     "cmpq    %%r10, %%r9\n\t"
> > +     "jb      114b\n\t"
> > +     "118:\n\t"
> > +     /* Copy 4 * VEC a time backward with non-temporal stores. */
> > +     "prefetcht0 (-32 * 4 * 2)(%%rcx)\n\t"
> > +     "prefetcht0 (-32 * 4 * 2 - 64)(%%rcx)\n\t"
> > +     "prefetcht0 (-32 * 4 * 3)(%%rcx)\n\t"
> > +     "prefetcht0 (-32 * 4 * 3 - 64)(%%rcx)\n\t"
> > +     "vmovdqu   (%%rcx), %%ymm0\n\t"
> > +     "vmovdqu   -32(%%rcx), %%ymm1\n\t"
> > +     "vmovdqu   -(32 * 2)(%%rcx), %%ymm2\n\t"
> > +     "vmovdqu   -(32 * 3)(%%rcx), %%ymm3\n\t"
> > +     "subq    $(32*4), %%rcx\n\t"
> > +     "subq    $(32*4), %%rdx\n\t"
> > +     "vmovntdq  %%ymm0, (%%r9)\n\t"
> > +     "vmovntdq  %%ymm1, -32(%%r9)\n\t"
> > +     "vmovntdq  %%ymm2, -(32 * 2)(%%r9)\n\t"
> > +     "vmovntdq  %%ymm3, -(32 * 3)(%%r9)\n\t"
> > +     "subq    $(32 * 4), %%r9\n\t"
> > +     "cmpq    $(32 * 4), %%rdx\n\t"
> > +     "ja      118b\n\t"
> > +     "sfence\n\t"
> > +     /* Store the first 4 * VEC.  */
> > +     "vmovdqu   %%ymm4, (%%rdi)\n\t"
> > +     "vmovdqu   %%ymm5, 32(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm6, (32 * 2)(%%rdi)\n\t"
> > +     "vmovdqu   %%ymm7, (32 * 3)(%%rdi)\n\t"
> > +     /* Store the last VEC.  */
> > +     "vmovdqu   %%ymm8, (%%r11)\n\t"
> > +     "vzeroupper\n\t"
> > +     "jmp %l[done]"
> > +     :
> > +     : "r"(src), "r"(dst), "r"(len)
> > +     : "rax", "rcx", "rdx", "rdi", "rsi", "r8", "r9", "r10", "r11", "r12",
> > +     "ymm0", "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "ymm8", "memory"
> > +     : done
> > +     );
> > +done:
> > +     return dst;
> > +}
>
>