From: "Ananyev, Konstantin"
To: "Song, Keesang", Thomas Monjalon, Aman Kumar
CC: dev@dpdk.org, rasland@nvidia.com, asafp@nvidia.com, shys@nvidia.com, viacheslavo@nvidia.com, akozyrev@nvidia.com, matan@nvidia.com, "Burakov, Anatoly", aman.kumar@vvdntech.in, jerinjacobk@gmail.com, "Richardson, Bruce", david.marchand@redhat.com
Thread-Topic: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal
Date: Thu, 21 Oct 2021 17:40:21 +0000
References: <20210823084411.29592-1-aman.kumar@vvdntech.in> <20211019104724.19416-1-aman.kumar@vvdntech.in> <1936725.1xRPv8SPVf@thomas>
Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal
List-Id: DPDK patches and discussions

>
> Hi Thomas,
>
> I hope this can offer some explanation to your question.
> We (the AMD Linux library support team) have implemented a custom-tailored memcpy solution that is a close match for DPDK use-case requirements such as the below:
>     1) Minimum 64B-length data packets with cache-aligned source and destination.
>     2) Non-temporal load and temporal store for a cache-aligned source, for both RX and TX paths. We could not implement a non-temporal store for the TX path, as non-temporal loads/stores work only with 32B-aligned addresses for AVX2.
>     3) This solution works on all AVX2-capable AMD machines.
>
> Internally we have completed integrity testing and benchmarking of the solution and found gains of 8.4% to 14.5%, specifically on Milan CPUs (3rd-gen EPYC processors).

It is still not clear to me why it has to be written in assembler.
Why can't similar code be written in C with intrinsics, as the rest of rte_memcpy.h does?
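For reference, a minimal sketch of what such an intrinsics-based version could look like. This is illustrative only, not the patch's code: the function name `ntload_tstore_copy` is invented, it uses `_mm256_stream_load_si256` (vmovntdqa) plus `_mm256_storeu_si256` (vmovdqu) when built with AVX2, and it falls back to a plain memcpy otherwise.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#if defined(__AVX2__)
#include <immintrin.h>
#endif

/* Hypothetical sketch of a non-temporal-load / temporal-store copy,
 * mirroring the patch's 128B/64B/32B loops with intrinsics instead of
 * inline asm.  The AVX2 path assumes the source is 32-byte aligned
 * (a vmovntdqa requirement); the tail and the non-AVX2 build fall
 * back to plain memcpy. */
static void *
ntload_tstore_copy(void *dst, const void *src, size_t size)
{
	uint8_t *d = dst;
	const uint8_t *s = src;

#if defined(__AVX2__)
	while (size >= 32) {
		/* Non-temporal (streaming) 32B load from aligned source. */
		__m256i v = _mm256_stream_load_si256((const __m256i *)s);
		/* Regular (temporal) unaligned 32B store to destination. */
		_mm256_storeu_si256((__m256i *)d, v);
		s += 32;
		d += 32;
		size -= 32;
	}
#endif
	/* Copy any remaining tail bytes (also the portable fallback). */
	memcpy(d, s, size);
	return dst;
}
```

Whether the compiler schedules this as well as the hand-written asm is exactly the question under discussion, but it keeps the code type-checked and lets the compiler manage registers and vzeroupper insertion.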
>
> Thanks for your support,
> Keesang
>
> -----Original Message-----
> From: Thomas Monjalon
> Sent: Tuesday, October 19, 2021 5:31 AM
> To: Aman Kumar
> Cc: dev@dpdk.org; rasland@nvidia.com; asafp@nvidia.com; shys@nvidia.com; viacheslavo@nvidia.com; akozyrev@nvidia.com; matan@nvidia.com; anatoly.burakov@intel.com; Song, Keesang; aman.kumar@vvdntech.in; jerinjacobk@gmail.com; bruce.richardson@intel.com; konstantin.ananyev@intel.com; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v2 1/2] lib/eal: add amd epyc2 memcpy routine to eal
>
> [CAUTION: External Email]
>
> 19/10/2021 12:47, Aman Kumar:
> > This patch provides rte_memcpy* calls optimized for AMD EPYC
> > platforms. Use config/x86/x86_amd_epyc_linux_gcc as cross-file with
> > meson to build dpdk for AMD EPYC platforms.
>
> Please split in 2 patches: platform & memcpy.
>
> What optimization is specific to EPYC?
>
> I dislike the asm code below.
> What is AMD specific inside?
> Can it use compiler intrinsics as it is done elsewhere?
>
> > +static __rte_always_inline void *
> > +rte_memcpy_aligned_ntload_tstore16_amdepyc2(void *dst,
> > +					    const void *src,
> > +					    size_t size)
> > +{
> > +	asm volatile goto("movq %0, %%rsi\n\t"
> > +	"movq %1, %%rdi\n\t"
> > +	"movq %2, %%rdx\n\t"
> > +	"cmpq $(128), %%rdx\n\t"
> > +	"jb 202f\n\t"
> > +	"201:\n\t"
> > +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> > +	"vmovntdqa 32(%%rsi), %%ymm1\n\t"
> > +	"vmovntdqa 64(%%rsi), %%ymm2\n\t"
> > +	"vmovntdqa 96(%%rsi), %%ymm3\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> > +	"vmovdqu %%ymm2, 64(%%rdi)\n\t"
> > +	"vmovdqu %%ymm3, 96(%%rdi)\n\t"
> > +	"addq $128, %%rsi\n\t"
> > +	"addq $128, %%rdi\n\t"
> > +	"subq $128, %%rdx\n\t"
> > +	"jz %l[done]\n\t"
> > +	"cmpq $128, %%rdx\n\t" /* Vector Size 32B.
*/
> > +	"jae 201b\n\t"
> > +	"202:\n\t"
> > +	"cmpq $64, %%rdx\n\t"
> > +	"jb 203f\n\t"
> > +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> > +	"vmovntdqa 32(%%rsi), %%ymm1\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> > +	"addq $64, %%rsi\n\t"
> > +	"addq $64, %%rdi\n\t"
> > +	"subq $64, %%rdx\n\t"
> > +	"jz %l[done]\n\t"
> > +	"203:\n\t"
> > +	"cmpq $32, %%rdx\n\t"
> > +	"jb 204f\n\t"
> > +	"vmovntdqa (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"addq $32, %%rsi\n\t"
> > +	"addq $32, %%rdi\n\t"
> > +	"subq $32, %%rdx\n\t"
> > +	"jz %l[done]\n\t"
> > +	"204:\n\t"
> > +	"cmpb $16, %%dl\n\t"
> > +	"jb 205f\n\t"
> > +	"vmovntdqa (%%rsi), %%xmm0\n\t"
> > +	"vmovdqu %%xmm0, (%%rdi)\n\t"
> > +	"addq $16, %%rsi\n\t"
> > +	"addq $16, %%rdi\n\t"
> > +	"subq $16, %%rdx\n\t"
> > +	"jz %l[done]\n\t"
> > +	"205:\n\t"
> > +	"cmpb $2, %%dl\n\t"
> > +	"jb 208f\n\t"
> > +	"cmpb $4, %%dl\n\t"
> > +	"jbe 207f\n\t"
> > +	"cmpb $8, %%dl\n\t"
> > +	"jbe 206f\n\t"
> > +	"movq -8(%%rsi,%%rdx), %%rcx\n\t"
> > +	"movq (%%rsi), %%rsi\n\t"
> > +	"movq %%rcx, -8(%%rdi,%%rdx)\n\t"
> > +	"movq %%rsi, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"206:\n\t"
> > +	"movl -4(%%rsi,%%rdx), %%ecx\n\t"
> > +	"movl (%%rsi), %%esi\n\t"
> > +	"movl %%ecx, -4(%%rdi,%%rdx)\n\t"
> > +	"movl %%esi, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"207:\n\t"
> > +	"movzwl -2(%%rsi,%%rdx), %%ecx\n\t"
> > +	"movzwl (%%rsi), %%esi\n\t"
> > +	"movw %%cx, -2(%%rdi,%%rdx)\n\t"
> > +	"movw %%si, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"208:\n\t"
> > +	"movzbl (%%rsi), %%ecx\n\t"
> > +	"movb %%cl, (%%rdi)"
> > +	:
> > +	: "r"(src), "r"(dst), "r"(size)
> > +	: "rcx", "rdx", "rsi", "rdi", "ymm0", "ymm1", "ymm2", "ymm3", "memory"
> > +	: done
> > +	);
> > +done:
> > +	return dst;
> > +}
> > +
> > +static __rte_always_inline void *
> > +rte_memcpy_generic(void *dst, const void *src, size_t len)
> > +{
> > +	asm goto("movq %0, %%rsi\n\t"
> > +	"movq %1, %%rdi\n\t"
> > +	"movq %2,
%%rdx\n\t"
> > +	"movq %%rdi, %%rax\n\t"
> > +	"cmp $32, %%rdx\n\t"
> > +	"jb 101f\n\t"
> > +	"cmp $(32 * 2), %%rdx\n\t"
> > +	"ja 108f\n\t"
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"101:\n\t"
> > +	/* Less than 1 VEC. */
> > +	"cmpb $32, %%dl\n\t"
> > +	"jae 103f\n\t"
> > +	"cmpb $16, %%dl\n\t"
> > +	"jae 104f\n\t"
> > +	"cmpb $8, %%dl\n\t"
> > +	"jae 105f\n\t"
> > +	"cmpb $4, %%dl\n\t"
> > +	"jae 106f\n\t"
> > +	"cmpb $1, %%dl\n\t"
> > +	"ja 107f\n\t"
> > +	"jb 102f\n\t"
> > +	"movzbl (%%rsi), %%ecx\n\t"
> > +	"movb %%cl, (%%rdi)\n\t"
> > +	"102:\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"103:\n\t"
> > +	/* From 32 to 63. No branch when size == 32. */
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +	/* From 16 to 31. No branch when size == 16. */
> > +	"104:\n\t"
> > +	"vmovdqu (%%rsi), %%xmm0\n\t"
> > +	"vmovdqu -16(%%rsi,%%rdx), %%xmm1\n\t"
> > +	"vmovdqu %%xmm0, (%%rdi)\n\t"
> > +	"vmovdqu %%xmm1, -16(%%rdi,%%rdx)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"105:\n\t"
> > +	/* From 8 to 15. No branch when size == 8. */
> > +	"movq -8(%%rsi,%%rdx), %%rcx\n\t"
> > +	"movq (%%rsi), %%rsi\n\t"
> > +	"movq %%rcx, -8(%%rdi,%%rdx)\n\t"
> > +	"movq %%rsi, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"106:\n\t"
> > +	/* From 4 to 7. No branch when size == 4. */
> > +	"movl -4(%%rsi,%%rdx), %%ecx\n\t"
> > +	"movl (%%rsi), %%esi\n\t"
> > +	"movl %%ecx, -4(%%rdi,%%rdx)\n\t"
> > +	"movl %%esi, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"107:\n\t"
> > +	/* From 2 to 3. No branch when size == 2.
*/
> > +	"movzwl -2(%%rsi,%%rdx), %%ecx\n\t"
> > +	"movzwl (%%rsi), %%esi\n\t"
> > +	"movw %%cx, -2(%%rdi,%%rdx)\n\t"
> > +	"movw %%si, (%%rdi)\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"108:\n\t"
> > +	/* More than 2 * VEC and there may be overlap between destination */
> > +	/* and source. */
> > +	"cmpq $(32 * 8), %%rdx\n\t"
> > +	"ja 111f\n\t"
> > +	"cmpq $(32 * 4), %%rdx\n\t"
> > +	"jb 109f\n\t"
> > +	/* Copy from 4 * VEC to 8 * VEC, inclusively. */
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> > +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> > +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> > +	"vmovdqu -32(%%rsi,%%rdx), %%ymm4\n\t"
> > +	"vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm5\n\t"
> > +	"vmovdqu -(32 * 3)(%%rsi,%%rdx), %%ymm6\n\t"
> > +	"vmovdqu -(32 * 4)(%%rsi,%%rdx), %%ymm7\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> > +	"vmovdqu %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +	"vmovdqu %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +	"vmovdqu %%ymm4, -32(%%rdi,%%rdx)\n\t"
> > +	"vmovdqu %%ymm5, -(32 * 2)(%%rdi,%%rdx)\n\t"
> > +	"vmovdqu %%ymm6, -(32 * 3)(%%rdi,%%rdx)\n\t"
> > +	"vmovdqu %%ymm7, -(32 * 4)(%%rdi,%%rdx)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"109:\n\t"
> > +	/* Copy from 2 * VEC to 4 * VEC. */
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> > +	"vmovdqu -32(%%rsi,%%rdx), %%ymm2\n\t"
> > +	"vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm3\n\t"
> > +	"vmovdqu %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm1, 32(%%rdi)\n\t"
> > +	"vmovdqu %%ymm2, -32(%%rdi,%%rdx)\n\t"
> > +	"vmovdqu %%ymm3, -(32 * 2)(%%rdi,%%rdx)\n\t"
> > +	"vzeroupper\n\t"
> > +	"110:\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"111:\n\t"
> > +	"cmpq %%rsi, %%rdi\n\t"
> > +	"ja 113f\n\t"
> > +	/* Source == destination is less common. */
> > +	"je 110b\n\t"
> > +	/* Load the first VEC and last 4 * VEC to
> > +	 * support overlapping addresses.
> > +	 */
> > +	"vmovdqu (%%rsi), %%ymm4\n\t"
> > +	"vmovdqu -32(%%rsi, %%rdx), %%ymm5\n\t"
> > +	"vmovdqu -(32 * 2)(%%rsi, %%rdx), %%ymm6\n\t"
> > +	"vmovdqu -(32 * 3)(%%rsi, %%rdx), %%ymm7\n\t"
> > +	"vmovdqu -(32 * 4)(%%rsi, %%rdx), %%ymm8\n\t"
> > +	/* Save start and stop of the destination buffer. */
> > +	"movq %%rdi, %%r11\n\t"
> > +	"leaq -32(%%rdi, %%rdx), %%rcx\n\t"
> > +	/* Align destination for aligned stores in the loop. Compute */
> > +	/* how much destination is misaligned. */
> > +	"movq %%rdi, %%r8\n\t"
> > +	"andq $(32 - 1), %%r8\n\t"
> > +	/* Get the negative of offset for alignment. */
> > +	"subq $32, %%r8\n\t"
> > +	/* Adjust source. */
> > +	"subq %%r8, %%rsi\n\t"
> > +	/* Adjust destination which should be aligned now. */
> > +	"subq %%r8, %%rdi\n\t"
> > +	/* Adjust length. */
> > +	"addq %%r8, %%rdx\n\t"
> > +	/* Check non-temporal store threshold. */
> > +	"cmpq $(1024*1024), %%rdx\n\t"
> > +	"ja 115f\n\t"
> > +	"112:\n\t"
> > +	/* Copy 4 * VEC a time forward. */
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> > +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> > +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> > +	"addq $(32 * 4), %%rsi\n\t"
> > +	"subq $(32 * 4), %%rdx\n\t"
> > +	"vmovdqa %%ymm0, (%%rdi)\n\t"
> > +	"vmovdqa %%ymm1, 32(%%rdi)\n\t"
> > +	"vmovdqa %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +	"vmovdqa %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +	"addq $(32 * 4), %%rdi\n\t"
> > +	"cmpq $(32 * 4), %%rdx\n\t"
> > +	"ja 112b\n\t"
> > +	/* Store the last 4 * VEC. */
> > +	"vmovdqu %%ymm5, (%%rcx)\n\t"
> > +	"vmovdqu %%ymm6, -32(%%rcx)\n\t"
> > +	"vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t"
> > +	"vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t"
> > +	/* Store the first VEC.
*/
> > +	"vmovdqu %%ymm4, (%%r11)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"113:\n\t"
> > +	/* Load the first 4*VEC and last VEC to support overlapping addresses.*/
> > +	"vmovdqu (%%rsi), %%ymm4\n\t"
> > +	"vmovdqu 32(%%rsi), %%ymm5\n\t"
> > +	"vmovdqu (32 * 2)(%%rsi), %%ymm6\n\t"
> > +	"vmovdqu (32 * 3)(%%rsi), %%ymm7\n\t"
> > +	"vmovdqu -32(%%rsi,%%rdx), %%ymm8\n\t"
> > +	/* Save stop of the destination buffer. */
> > +	"leaq -32(%%rdi, %%rdx), %%r11\n\t"
> > +	/* Align destination end for aligned stores in the loop. Compute */
> > +	/* how much destination end is misaligned. */
> > +	"leaq -32(%%rsi, %%rdx), %%rcx\n\t"
> > +	"movq %%r11, %%r9\n\t"
> > +	"movq %%r11, %%r8\n\t"
> > +	"andq $(32 - 1), %%r8\n\t"
> > +	/* Adjust source. */
> > +	"subq %%r8, %%rcx\n\t"
> > +	/* Adjust the end of destination which should be aligned now. */
> > +	"subq %%r8, %%r9\n\t"
> > +	/* Adjust length. */
> > +	"subq %%r8, %%rdx\n\t"
> > +	/* Check non-temporal store threshold. */
> > +	"cmpq $(1024*1024), %%rdx\n\t"
> > +	"ja 117f\n\t"
> > +	"114:\n\t"
> > +	/* Copy 4 * VEC a time backward. */
> > +	"vmovdqu (%%rcx), %%ymm0\n\t"
> > +	"vmovdqu -32(%%rcx), %%ymm1\n\t"
> > +	"vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t"
> > +	"vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t"
> > +	"subq $(32 * 4), %%rcx\n\t"
> > +	"subq $(32 * 4), %%rdx\n\t"
> > +	"vmovdqa %%ymm0, (%%r9)\n\t"
> > +	"vmovdqa %%ymm1, -32(%%r9)\n\t"
> > +	"vmovdqa %%ymm2, -(32 * 2)(%%r9)\n\t"
> > +	"vmovdqa %%ymm3, -(32 * 3)(%%r9)\n\t"
> > +	"subq $(32 * 4), %%r9\n\t"
> > +	"cmpq $(32 * 4), %%rdx\n\t"
> > +	"ja 114b\n\t"
> > +	/* Store the first 4 * VEC. */
> > +	"vmovdqu %%ymm4, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm5, 32(%%rdi)\n\t"
> > +	"vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t"
> > +	"vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t"
> > +	/* Store the last VEC.
*/
> > +	"vmovdqu %%ymm8, (%%r11)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +
> > +	"115:\n\t"
> > +	/* Don't use non-temporal store if there is overlap between */
> > +	/* destination and source since destination may be in cache */
> > +	/* when source is loaded. */
> > +	"leaq (%%rdi, %%rdx), %%r10\n\t"
> > +	"cmpq %%r10, %%rsi\n\t"
> > +	"jb 112b\n\t"
> > +	"116:\n\t"
> > +	/* Copy 4 * VEC a time forward with non-temporal stores. */
> > +	"prefetcht0 (32*4*2)(%%rsi)\n\t"
> > +	"prefetcht0 (32*4*2 + 64)(%%rsi)\n\t"
> > +	"prefetcht0 (32*4*3)(%%rsi)\n\t"
> > +	"prefetcht0 (32*4*3 + 64)(%%rsi)\n\t"
> > +	"vmovdqu (%%rsi), %%ymm0\n\t"
> > +	"vmovdqu 32(%%rsi), %%ymm1\n\t"
> > +	"vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t"
> > +	"vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t"
> > +	"addq $(32*4), %%rsi\n\t"
> > +	"subq $(32*4), %%rdx\n\t"
> > +	"vmovntdq %%ymm0, (%%rdi)\n\t"
> > +	"vmovntdq %%ymm1, 32(%%rdi)\n\t"
> > +	"vmovntdq %%ymm2, (32 * 2)(%%rdi)\n\t"
> > +	"vmovntdq %%ymm3, (32 * 3)(%%rdi)\n\t"
> > +	"addq $(32*4), %%rdi\n\t"
> > +	"cmpq $(32*4), %%rdx\n\t"
> > +	"ja 116b\n\t"
> > +	"sfence\n\t"
> > +	/* Store the last 4 * VEC. */
> > +	"vmovdqu %%ymm5, (%%rcx)\n\t"
> > +	"vmovdqu %%ymm6, -32(%%rcx)\n\t"
> > +	"vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t"
> > +	"vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t"
> > +	/* Store the first VEC. */
> > +	"vmovdqu %%ymm4, (%%r11)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]\n\t"
> > +	"117:\n\t"
> > +	/* Don't use non-temporal store if there is overlap between */
> > +	/* destination and source since destination may be in cache */
> > +	/* when source is loaded. */
> > +	"leaq (%%rcx, %%rdx), %%r10\n\t"
> > +	"cmpq %%r10, %%r9\n\t"
> > +	"jb 114b\n\t"
> > +	"118:\n\t"
> > +	/* Copy 4 * VEC a time backward with non-temporal stores.
*/
> > +	"prefetcht0 (-32 * 4 * 2)(%%rcx)\n\t"
> > +	"prefetcht0 (-32 * 4 * 2 - 64)(%%rcx)\n\t"
> > +	"prefetcht0 (-32 * 4 * 3)(%%rcx)\n\t"
> > +	"prefetcht0 (-32 * 4 * 3 - 64)(%%rcx)\n\t"
> > +	"vmovdqu (%%rcx), %%ymm0\n\t"
> > +	"vmovdqu -32(%%rcx), %%ymm1\n\t"
> > +	"vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t"
> > +	"vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t"
> > +	"subq $(32*4), %%rcx\n\t"
> > +	"subq $(32*4), %%rdx\n\t"
> > +	"vmovntdq %%ymm0, (%%r9)\n\t"
> > +	"vmovntdq %%ymm1, -32(%%r9)\n\t"
> > +	"vmovntdq %%ymm2, -(32 * 2)(%%r9)\n\t"
> > +	"vmovntdq %%ymm3, -(32 * 3)(%%r9)\n\t"
> > +	"subq $(32 * 4), %%r9\n\t"
> > +	"cmpq $(32 * 4), %%rdx\n\t"
> > +	"ja 118b\n\t"
> > +	"sfence\n\t"
> > +	/* Store the first 4 * VEC. */
> > +	"vmovdqu %%ymm4, (%%rdi)\n\t"
> > +	"vmovdqu %%ymm5, 32(%%rdi)\n\t"
> > +	"vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t"
> > +	"vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t"
> > +	/* Store the last VEC. */
> > +	"vmovdqu %%ymm8, (%%r11)\n\t"
> > +	"vzeroupper\n\t"
> > +	"jmp %l[done]"
> > +	:
> > +	: "r"(src), "r"(dst), "r"(len)
> > +	: "rax", "rcx", "rdx", "rdi", "rsi", "r8", "r9", "r10", "r11", "r12", "ymm0",
> > +	"ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "ymm8", "memory"
> > +	: done
> > +	);
> > +done:
> > +	return dst;
> > +}
>
>
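As an aside for readers following the patch's small-size branches (labels 105/206 and neighbours): they avoid a byte loop by loading one chunk from each end of the region and storing both, so the overlapping accesses cover every length in the range. A portable C sketch of that technique, with an invented helper name and assuming only that 8 <= n <= 16:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch (not the patch's code) of the overlapping
 * head/tail copy used by the asm's 8-to-16-byte branch: one 8-byte
 * load from each end covers the whole region, exactly like the
 * "movq -8(%rsi,%rdx)" / "movq (%rsi)" pair at labels 105 and 206.
 * Requires 8 <= n <= 16. */
static void
copy_8_to_16(uint8_t *dst, const uint8_t *src, size_t n)
{
	uint64_t head, tail;

	/* memcpy instead of pointer casts avoids strict-aliasing and
	 * alignment pitfalls; compilers lower these fixed-size calls
	 * to single 8-byte loads and stores. */
	memcpy(&head, src, 8);          /* first 8 bytes */
	memcpy(&tail, src + n - 8, 8);  /* last 8 bytes (may overlap head) */
	memcpy(dst, &head, 8);
	memcpy(dst + n - 8, &tail, 8);
}
```

The same pattern scales to the 2-3, 4-7, and 16-31 byte ranges with 16-bit, 32-bit, and 128-bit accesses respectively, which is why each branch in the asm is comment-tagged "No branch when size == N".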