From: "Song, Keesang"
To: Thomas Monjalon, Aman Kumar
Cc: dev@dpdk.org, rasland@nvidia.com, asafp@nvidia.com, shys@nvidia.com, viacheslavo@nvidia.com, akozyrev@nvidia.com, matan@nvidia.com, anatoly.burakov@intel.com, aman.kumar@vvdntech.in, jerinjacobk@gmail.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, david.marchand@redhat.com
Date: Thu, 21 Oct 2021 17:10:31 +0000
Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal
In-Reply-To: <1936725.1xRPv8SPVf@thomas>
References: <20210823084411.29592-1-aman.kumar@vvdntech.in> <20211019104724.19416-1-aman.kumar@vvdntech.in> <1936725.1xRPv8SPVf@thomas>

Hi Thomas,

I hope this answers your question. We (the AMD Linux library support team) have implemented a custom-tailored memcpy solution that closely matches DPDK use-case requirements, namely:

1) A minimum 64B packet length, with cache-aligned source and destination.
2) Non-temporal loads and temporal stores for cache-aligned sources, on both the RX and TX paths. We could not use non-temporal stores on the TX path, because AVX2 non-temporal loads/stores work only with 32B-aligned addresses.
3) The solution works on all AVX2-capable AMD machines.

Internally we have completed integrity testing and benchmarking of the solution, and measured gains of 8.4% to 14.5%, specifically on Milan CPUs (3rd Gen EPYC processors).

Thanks for your support,
Keesang

-----Original Message-----
From: Thomas Monjalon
Sent: Tuesday, October 19, 2021 5:31 AM
To: Aman Kumar
Cc: dev@dpdk.org; rasland@nvidia.com; asafp@nvidia.com; shys@nvidia.com; viacheslavo@nvidia.com; akozyrev@nvidia.com; matan@nvidia.com; anatoly.burakov@intel.com; Song, Keesang; aman.kumar@vvdntech.in; jerinjacobk@gmail.com; bruce.richardson@intel.com; konstantin.ananyev@intel.com; david.marchand@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal

19/10/2021 12:47, Aman Kumar:
> This patch provides rte_memcpy* calls optimized for AMD EPYC
> platforms. Use config/x86/x86_amd_epyc_linux_gcc as cross-file with
> meson to build dpdk for AMD EPYC platforms.

Please split this into 2 patches: platform & memcpy.

Which optimization is specific to EPYC?
I dislike the asm code below.
What is AMD-specific inside?
Can it use compiler intrinsics, as is done elsewhere?
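For reference, the non-temporal-load scheme described above (VMOVNTDQA loads paired with regular VMOVDQU stores) maps fairly directly onto compiler intrinsics. Below is a minimal, untested sketch of the 128B-per-iteration aligned loop; it assumes a 32B-aligned source and a size that is a multiple of 128, and the helper name is illustrative, not from the patch:

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: copy 'size' bytes with non-temporal loads and temporal
 * (regular) stores. Assumes src is 32B-aligned and size is a multiple
 * of 128; the smaller-size tails of the asm are omitted.
 */
static inline void *
copy128_ntload_tstore(void *dst, const void *src, size_t size)
{
	uint8_t *d = (uint8_t *)dst;
	const uint8_t *s = (const uint8_t *)src;

	while (size >= 128) {
		/* VMOVNTDQA: load around the cache; the source stays uncached. */
		__m256i y0 = _mm256_stream_load_si256((const __m256i *)(s + 0));
		__m256i y1 = _mm256_stream_load_si256((const __m256i *)(s + 32));
		__m256i y2 = _mm256_stream_load_si256((const __m256i *)(s + 64));
		__m256i y3 = _mm256_stream_load_si256((const __m256i *)(s + 96));
		/* VMOVDQU: ordinary (temporal) stores; the destination is cached. */
		_mm256_storeu_si256((__m256i *)(d + 0), y0);
		_mm256_storeu_si256((__m256i *)(d + 32), y1);
		_mm256_storeu_si256((__m256i *)(d + 64), y2);
		_mm256_storeu_si256((__m256i *)(d + 96), y3);
		s += 128;
		d += 128;
		size -= 128;
	}
	return dst;
}

Whether a compiler schedules this as well as the hand-written loop is something the benchmarks mentioned above would have to confirm.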
> +static __rte_always_inline void *
> +rte_memcpy_aligned_ntload_tstore16_amdepyc2(void *dst,
> +		const void *src,
> +		size_t size) {
> +	asm volatile goto("movq %0, %%rsi\n\t"
> +	"movq %1, %%rdi\n\t"
> +	"movq %2, %%rdx\n\t"
> +	"cmpq $(128), %%rdx\n\t"
> +	"jb 202f\n\t"
> +	"201:\n\t"
> +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> +	"vmovntdqa 32(%%rsi), %%ymm1\n\t"
> +	"vmovntdqa 64(%%rsi), %%ymm2\n\t"
> +	"vmovntdqa 96(%%rsi), %%ymm3\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> +	"vmovdqu %%ymm2, 64(%%rdi)\n\t"
> +	"vmovdqu %%ymm3, 96(%%rdi)\n\t"
> +	"addq $128, %%rsi\n\t"
> +	"addq $128, %%rdi\n\t"
> +	"subq $128, %%rdx\n\t"
> +	"jz %l[done]\n\t"
> +	"cmpq $128, %%rdx\n\t" /* Vector Size 32B. */
> +	"jae 201b\n\t"
> +	"202:\n\t"
> +	"cmpq $64, %%rdx\n\t"
> +	"jb 203f\n\t"
> +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> +	"vmovntdqa 32(%%rsi), %%ymm1\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> +	"addq $64, %%rsi\n\t"
> +	"addq $64, %%rdi\n\t"
> +	"subq $64, %%rdx\n\t"
> +	"jz %l[done]\n\t"
> +	"203:\n\t"
> +	"cmpq $32, %%rdx\n\t"
> +	"jb 204f\n\t"
> +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"addq $32, %%rsi\n\t"
> +	"addq $32, %%rdi\n\t"
> +	"subq $32, %%rdx\n\t"
> +	"jz %l[done]\n\t"
> +	"204:\n\t"
> +	"cmpb $16, %%dl\n\t"
> +	"jb 205f\n\t"
> +	"vmovntdqa (%%rsi), %%xmm0\n\t"
> +	"vmovdqu %%xmm0, (%%rdi)\n\t"
> +	"addq $16, %%rsi\n\t"
> +	"addq $16, %%rdi\n\t"
> +	"subq $16, %%rdx\n\t"
> +	"jz %l[done]\n\t"
> +	"205:\n\t"
> +	"cmpb $2, %%dl\n\t"
> +	"jb 208f\n\t"
> +	"cmpb $4, %%dl\n\t"
> +	"jbe 207f\n\t"
> +	"cmpb $8, %%dl\n\t"
> +	"jbe 206f\n\t"
> +	"movq -8(%%rsi,%%rdx), %%rcx\n\t"
> +	"movq (%%rsi), %%rsi\n\t"
> +	"movq %%rcx, -8(%%rdi,%%rdx)\n\t"
> +	"movq %%rsi, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"206:\n\t"
> +	"movl -4(%%rsi,%%rdx), %%ecx\n\t"
> +	"movl (%%rsi), %%esi\n\t"
> +	"movl %%ecx, -4(%%rdi,%%rdx)\n\t"
> +	"movl %%esi, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"207:\n\t"
> +	"movzwl -2(%%rsi,%%rdx), %%ecx\n\t"
> +	"movzwl (%%rsi), %%esi\n\t"
> +	"movw %%cx, -2(%%rdi,%%rdx)\n\t"
> +	"movw %%si, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"208:\n\t"
> +	"movzbl (%%rsi), %%ecx\n\t"
> +	"movb %%cl, (%%rdi)"
> +	:
> +	: "r"(src), "r"(dst), "r"(size)
> +	: "rcx", "rdx", "rsi", "rdi", "ymm0", "ymm1", "ymm2", "ymm3", "memory"
> +	: done
> +	);
> +done:
> +	return dst;
> +}
> +
> +static __rte_always_inline void *
> +rte_memcpy_generic(void *dst, const void *src, size_t len) {
> +	asm goto("movq %0, %%rsi\n\t"
> +	"movq %1, %%rdi\n\t"
> +	"movq %2, %%rdx\n\t"
> +	"movq %%rdi, %%rax\n\t"
> +	"cmp $32, %%rdx\n\t"
> +	"jb 101f\n\t"
> +	"cmp $(32 * 2), %%rdx\n\t"
> +	"ja 108f\n\t"
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +	"101:\n\t"
> +	/* Less than 1 VEC. */
> +	"cmpb $32, %%dl\n\t"
> +	"jae 103f\n\t"
> +	"cmpb $16, %%dl\n\t"
> +	"jae 104f\n\t"
> +	"cmpb $8, %%dl\n\t"
> +	"jae 105f\n\t"
> +	"cmpb $4, %%dl\n\t"
> +	"jae 106f\n\t"
> +	"cmpb $1, %%dl\n\t"
> +	"ja 107f\n\t"
> +	"jb 102f\n\t"
> +	"movzbl (%%rsi), %%ecx\n\t"
> +	"movb %%cl, (%%rdi)\n\t"
> +	"102:\n\t"
> +	"jmp %l[done]\n\t"
> +	"103:\n\t"
> +	/* From 32 to 63. No branch when size == 32. */
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +	/* From 16 to 31. No branch when size == 16. */
> +	"104:\n\t"
> +	"vmovdqu (%%rsi), %%xmm0\n\t"
> +	"vmovdqu -16(%%rsi,%%rdx), %%xmm1\n\t"
> +	"vmovdqu %%xmm0, (%%rdi)\n\t"
> +	"vmovdqu %%xmm1, -16(%%rdi,%%rdx)\n\t"
> +	"jmp %l[done]\n\t"
> +	"105:\n\t"
> +	/* From 8 to 15. No branch when size == 8. */
> +	"movq -8(%%rsi,%%rdx), %%rcx\n\t"
> +	"movq (%%rsi), %%rsi\n\t"
> +	"movq %%rcx, -8(%%rdi,%%rdx)\n\t"
> +	"movq %%rsi, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"106:\n\t"
> +	/* From 4 to 7. No branch when size == 4. */
> +	"movl -4(%%rsi,%%rdx), %%ecx\n\t"
> +	"movl (%%rsi), %%esi\n\t"
> +	"movl %%ecx, -4(%%rdi,%%rdx)\n\t"
> +	"movl %%esi, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"107:\n\t"
> +	/* From 2 to 3. No branch when size == 2. */
> +	"movzwl -2(%%rsi,%%rdx), %%ecx\n\t"
> +	"movzwl (%%rsi), %%esi\n\t"
> +	"movw %%cx, -2(%%rdi,%%rdx)\n\t"
> +	"movw %%si, (%%rdi)\n\t"
> +	"jmp %l[done]\n\t"
> +	"108:\n\t"
> +	/* More than 2 * VEC and there may be overlap between destination */
> +	/* and source. */
> +	"cmpq $(32 * 8), %%rdx\n\t"
> +	"ja 111f\n\t"
> +	"cmpq $(32 * 4), %%rdx\n\t"
> +	"jb 109f\n\t"
> +	/* Copy from 4 * VEC to 8 * VEC, inclusively. */
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> +	"vmovdqu -32(%%rsi,%%rdx), %%ymm4\n\t"
> +	"vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm5\n\t"
> +	"vmovdqu -(32 * 3)(%%rsi,%%rdx), %%ymm6\n\t"
> +	"vmovdqu -(32 * 4)(%%rsi,%%rdx), %%ymm7\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> +	"vmovdqu %%ymm2, (32 * 2)(%%rdi)\n\t"
> +	"vmovdqu %%ymm3, (32 * 3)(%%rdi)\n\t"
> +	"vmovdqu %%ymm4, -32(%%rdi,%%rdx)\n\t"
> +	"vmovdqu %%ymm5, -(32 * 2)(%%rdi,%%rdx)\n\t"
> +	"vmovdqu %%ymm6, -(32 * 3)(%%rdi,%%rdx)\n\t"
> +	"vmovdqu %%ymm7, -(32 * 4)(%%rdi,%%rdx)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +	"109:\n\t"
> +	/* Copy from 2 * VEC to 4 * VEC. */
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> +	"vmovdqu -32(%%rsi,%%rdx), %%ymm2\n\t"
> +	"vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm3\n\t"
> +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> +	"vmovdqu %%ymm2, -32(%%rdi,%%rdx)\n\t"
> +	"vmovdqu %%ymm3, -(32 * 2)(%%rdi,%%rdx)\n\t"
> +	"vzeroupper\n\t"
> +	"110:\n\t"
> +	"jmp %l[done]\n\t"
> +	"111:\n\t"
> +	"cmpq %%rsi, %%rdi\n\t"
> +	"ja 113f\n\t"
> +	/* Source == destination is less common. */
> +	"je 110b\n\t"
> +	/* Load the first VEC and last 4 * VEC to
> +	 * support overlapping addresses.
> +	 */
> +	"vmovdqu (%%rsi), %%ymm4\n\t"
> +	"vmovdqu -32(%%rsi, %%rdx), %%ymm5\n\t"
> +	"vmovdqu -(32 * 2)(%%rsi, %%rdx), %%ymm6\n\t"
> +	"vmovdqu -(32 * 3)(%%rsi, %%rdx), %%ymm7\n\t"
> +	"vmovdqu -(32 * 4)(%%rsi, %%rdx), %%ymm8\n\t"
> +	/* Save start and stop of the destination buffer. */
> +	"movq %%rdi, %%r11\n\t"
> +	"leaq -32(%%rdi, %%rdx), %%rcx\n\t"
> +	/* Align destination for aligned stores in the loop. Compute */
> +	/* how much destination is misaligned. */
> +	"movq %%rdi, %%r8\n\t"
> +	"andq $(32 - 1), %%r8\n\t"
> +	/* Get the negative of offset for alignment. */
> +	"subq $32, %%r8\n\t"
> +	/* Adjust source. */
> +	"subq %%r8, %%rsi\n\t"
> +	/* Adjust destination which should be aligned now. */
> +	"subq %%r8, %%rdi\n\t"
> +	/* Adjust length. */
> +	"addq %%r8, %%rdx\n\t"
> +	/* Check non-temporal store threshold. */
> +	"cmpq $(1024*1024), %%rdx\n\t"
> +	"ja 115f\n\t"
> +	"112:\n\t"
> +	/* Copy 4 * VEC a time forward. */
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> +	"addq $(32 * 4), %%rsi\n\t"
> +	"subq $(32 * 4), %%rdx\n\t"
> +	"vmovdqa %%ymm0, (%%rdi)\n\t"
> +	"vmovdqa %%ymm1, 32(%%rdi)\n\t"
> +	"vmovdqa %%ymm2, (32 * 2)(%%rdi)\n\t"
> +	"vmovdqa %%ymm3, (32 * 3)(%%rdi)\n\t"
> +	"addq $(32 * 4), %%rdi\n\t"
> +	"cmpq $(32 * 4), %%rdx\n\t"
> +	"ja 112b\n\t"
> +	/* Store the last 4 * VEC. */
> +	"vmovdqu %%ymm5, (%%rcx)\n\t"
> +	"vmovdqu %%ymm6, -32(%%rcx)\n\t"
> +	"vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t"
> +	"vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t"
> +	/* Store the first VEC. */
> +	"vmovdqu %%ymm4, (%%r11)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +	"113:\n\t"
> +	/* Load the first 4*VEC and last VEC to support overlapping addresses. */
> +	"vmovdqu (%%rsi), %%ymm4\n\t"
> +	"vmovdqu 32(%%rsi), %%ymm5\n\t"
> +	"vmovdqu (32 * 2)(%%rsi), %%ymm6\n\t"
> +	"vmovdqu (32 * 3)(%%rsi), %%ymm7\n\t"
> +	"vmovdqu -32(%%rsi,%%rdx), %%ymm8\n\t"
> +	/* Save stop of the destination buffer. */
> +	"leaq -32(%%rdi, %%rdx), %%r11\n\t"
> +	/* Align destination end for aligned stores in the loop. Compute */
> +	/* how much destination end is misaligned. */
> +	"leaq -32(%%rsi, %%rdx), %%rcx\n\t"
> +	"movq %%r11, %%r9\n\t"
> +	"movq %%r11, %%r8\n\t"
> +	"andq $(32 - 1), %%r8\n\t"
> +	/* Adjust source. */
> +	"subq %%r8, %%rcx\n\t"
> +	/* Adjust the end of destination which should be aligned now. */
> +	"subq %%r8, %%r9\n\t"
> +	/* Adjust length. */
> +	"subq %%r8, %%rdx\n\t"
> +	/* Check non-temporal store threshold. */
> +	"cmpq $(1024*1024), %%rdx\n\t"
> +	"ja 117f\n\t"
> +	"114:\n\t"
> +	/* Copy 4 * VEC a time backward. */
> +	"vmovdqu (%%rcx), %%ymm0\n\t"
> +	"vmovdqu -32(%%rcx), %%ymm1\n\t"
> +	"vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t"
> +	"vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t"
> +	"subq $(32 * 4), %%rcx\n\t"
> +	"subq $(32 * 4), %%rdx\n\t"
> +	"vmovdqa %%ymm0, (%%r9)\n\t"
> +	"vmovdqa %%ymm1, -32(%%r9)\n\t"
> +	"vmovdqa %%ymm2, -(32 * 2)(%%r9)\n\t"
> +	"vmovdqa %%ymm3, -(32 * 3)(%%r9)\n\t"
> +	"subq $(32 * 4), %%r9\n\t"
> +	"cmpq $(32 * 4), %%rdx\n\t"
> +	"ja 114b\n\t"
> +	/* Store the first 4 * VEC. */
> +	"vmovdqu %%ymm4, (%%rdi)\n\t"
> +	"vmovdqu %%ymm5, 32(%%rdi)\n\t"
> +	"vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t"
> +	"vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t"
> +	/* Store the last VEC. */
> +	"vmovdqu %%ymm8, (%%r11)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +
> +	"115:\n\t"
> +	/* Don't use non-temporal store if there is overlap between */
> +	/* destination and source since destination may be in cache */
> +	/* when source is loaded. */
> +	"leaq (%%rdi, %%rdx), %%r10\n\t"
> +	"cmpq %%r10, %%rsi\n\t"
> +	"jb 112b\n\t"
> +	"116:\n\t"
> +	/* Copy 4 * VEC a time forward with non-temporal stores. */
> +	"prefetcht0 (32*4*2)(%%rsi)\n\t"
> +	"prefetcht0 (32*4*2 + 64)(%%rsi)\n\t"
> +	"prefetcht0 (32*4*3)(%%rsi)\n\t"
> +	"prefetcht0 (32*4*3 + 64)(%%rsi)\n\t"
> +	"vmovdqu (%%rsi), %%ymm0\n\t"
> +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> +	"addq $(32*4), %%rsi\n\t"
> +	"subq $(32*4), %%rdx\n\t"
> +	"vmovntdq %%ymm0, (%%rdi)\n\t"
> +	"vmovntdq %%ymm1, 32(%%rdi)\n\t"
> +	"vmovntdq %%ymm2, (32 * 2)(%%rdi)\n\t"
> +	"vmovntdq %%ymm3, (32 * 3)(%%rdi)\n\t"
> +	"addq $(32*4), %%rdi\n\t"
> +	"cmpq $(32*4), %%rdx\n\t"
> +	"ja 116b\n\t"
> +	"sfence\n\t"
> +	/* Store the last 4 * VEC. */
> +	"vmovdqu %%ymm5, (%%rcx)\n\t"
> +	"vmovdqu %%ymm6, -32(%%rcx)\n\t"
> +	"vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t"
> +	"vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t"
> +	/* Store the first VEC. */
> +	"vmovdqu %%ymm4, (%%r11)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]\n\t"
> +	"117:\n\t"
> +	/* Don't use non-temporal store if there is overlap between */
> +	/* destination and source since destination may be in cache */
> +	/* when source is loaded. */
> +	"leaq (%%rcx, %%rdx), %%r10\n\t"
> +	"cmpq %%r10, %%r9\n\t"
> +	"jb 114b\n\t"
> +	"118:\n\t"
> +	/* Copy 4 * VEC a time backward with non-temporal stores. */
> +	"prefetcht0 (-32 * 4 * 2)(%%rcx)\n\t"
> +	"prefetcht0 (-32 * 4 * 2 - 64)(%%rcx)\n\t"
> +	"prefetcht0 (-32 * 4 * 3)(%%rcx)\n\t"
> +	"prefetcht0 (-32 * 4 * 3 - 64)(%%rcx)\n\t"
> +	"vmovdqu (%%rcx), %%ymm0\n\t"
> +	"vmovdqu -32(%%rcx), %%ymm1\n\t"
> +	"vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t"
> +	"vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t"
> +	"subq $(32*4), %%rcx\n\t"
> +	"subq $(32*4), %%rdx\n\t"
> +	"vmovntdq %%ymm0, (%%r9)\n\t"
> +	"vmovntdq %%ymm1, -32(%%r9)\n\t"
> +	"vmovntdq %%ymm2, -(32 * 2)(%%r9)\n\t"
> +	"vmovntdq %%ymm3, -(32 * 3)(%%r9)\n\t"
> +	"subq $(32 * 4), %%r9\n\t"
> +	"cmpq $(32 * 4), %%rdx\n\t"
> +	"ja 118b\n\t"
> +	"sfence\n\t"
> +	/* Store the first 4 * VEC. */
> +	"vmovdqu %%ymm4, (%%rdi)\n\t"
> +	"vmovdqu %%ymm5, 32(%%rdi)\n\t"
> +	"vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t"
> +	"vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t"
> +	/* Store the last VEC. */
> +	"vmovdqu %%ymm8, (%%r11)\n\t"
> +	"vzeroupper\n\t"
> +	"jmp %l[done]"
> +	:
> +	: "r"(src), "r"(dst), "r"(len)
> +	: "rax", "rcx", "rdx", "rdi", "rsi", "r8", "r9", "r10", "r11", "r12", "ymm0",
> +	"ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "ymm8", "memory"
> +	: done
> +	);
> +done:
> +	return dst;
> +}