From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
To: thomas@monjalon.net, jasvinder.singh@intel.com, viktorin@rehivetech.com,
	jerin.jacob@caviumnetworks.com, jianbo.liu@linaro.org
Cc: dev@dpdk.org, Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Date: Fri, 28 Apr 2017 02:34:16 -0700
Message-Id: <20170428093417.48703-1-ashwin.sekhar@caviumnetworks.com>
X-Mailer: git-send-email 2.13.0.rc1
In-Reply-To: <20170427141021.18767-1-ashwin.sekhar@caviumnetworks.com>
References: <20170427141021.18767-1-ashwin.sekhar@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 1/2] net: add arm64 neon version of CRC
	compute APIs

* Added CRC compute APIs for arm64 utilizing the pmull capability
* Added new file net_crc_neon.h to hold the arm64 pmull CRC
  implementation
* Added crypto capability in compilation of generic armv8 and
  thunderx targets
* pmull CRC version is used only after checking the pmull capability
  at runtime (see the note after the MAINTAINERS hunk below)
* Verified the changes with crc_autotest unit test case

Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
---
v2:
* Fixed merge conflict in MAINTAINERS

 MAINTAINERS                                       |   1 +
 lib/librte_eal/common/include/arch/arm/rte_vect.h |  45 +++
 lib/librte_net/net_crc_neon.h                     | 357 ++++++++++++++++++++++
 lib/librte_net/rte_net_crc.c                      |  32 +-
 lib/librte_net/rte_net_crc.h                      |   2 +
 mk/machine/armv8a/rte.vars.mk                     |   2 +-
 mk/machine/thunderx/rte.vars.mk                   |   2 +-
 mk/rte.cpuflags.mk                                |   3 +
 mk/toolchain/gcc/rte.toolchain-compat.mk          |   1 +
 9 files changed, 438 insertions(+), 7 deletions(-)
 create mode 100644 lib/librte_net/net_crc_neon.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b6495d2..66d64c2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -147,6 +147,7 @@ F: lib/librte_eal/common/include/arch/arm/*_64.h
 F: lib/librte_acl/acl_run_neon.*
 F: lib/librte_lpm/rte_lpm_neon.h
 F: lib/librte_hash/rte*_arm64.h
+F: lib/librte_net/net_crc_neon.h
 F: drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
 F: drivers/net/i40e/i40e_rxtx_vec_neon.c
 F: drivers/net/virtio/virtio_rxtx_simple_neon.c
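
A note on the runtime check mentioned in the changelog:
rte_net_crc_set_alg() verifies the PMULL cpuflag internally before
installing the NEON handlers, so callers may simply request
RTE_NET_CRC_NEON. A minimal sketch of the equivalent explicit check
(hypothetical snippet, not part of the patch; RTE_CPUFLAG_PMULL exists
only in arm64 builds):

	#include <rte_cpuflags.h>
	#include <rte_net_crc.h>

	/* Hypothetical helper: pick the CRC implementation at startup. */
	static void
	select_crc_alg(void)
	{
		/* Query the kernel-reported CPU features for PMULL. */
		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_PMULL))
			rte_net_crc_set_alg(RTE_NET_CRC_NEON);
		else
			rte_net_crc_set_alg(RTE_NET_CRC_SCALAR);
	}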
diff --git a/lib/librte_eal/common/include/arch/arm/rte_vect.h b/lib/librte_eal/common/include/arch/arm/rte_vect.h
index 4107c99..9a3dfdf 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_vect.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_vect.h
@@ -34,9 +34,18 @@
 #define _RTE_VECT_ARM_H_
 
 #include <stdint.h>
+#include <assert.h>
+
 #include "generic/rte_vect.h"
 #include "arm_neon.h"
 
+#ifdef GCC_VERSION
+#undef GCC_VERSION
+#endif
+
+#define GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 \
+		+ __GNUC_PATCHLEVEL__)
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -78,6 +87,42 @@ vqtbl1q_u8(uint8x16_t a, uint8x16_t b)
 }
 #endif
 
+#if (GCC_VERSION < 70000)
+/*
+ * NEON intrinsic vreinterpretq_u64_p128() is not supported
+ * in GCC versions < 7
+ */
+static inline uint64x2_t
+vreinterpretq_u64_p128(poly128_t x)
+{
+	return (uint64x2_t)x;
+}
+
+/*
+ * NEON intrinsic vreinterpretq_p64_u64() is not supported
+ * in GCC versions < 7
+ */
+static inline poly64x2_t
+vreinterpretq_p64_u64(uint64x2_t x)
+{
+	return (poly64x2_t)x;
+}
+
+/*
+ * NEON intrinsic vgetq_lane_p64() is not supported
+ * in GCC versions < 7
+ */
+static inline poly64_t
+vgetq_lane_p64(poly64x2_t x, const int lane)
+{
+	assert(lane >= 0 && lane <= 1);
+
+	poly64_t *p = (poly64_t *)&x;
+
+	return p[lane];
+}
+#endif
+
 #ifdef __cplusplus
 }
 #endif
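
The GCC_VERSION macro introduced above packs major/minor/patchlevel
into a single integer, so GCC 6.3.0 evaluates to
6 * 10000 + 3 * 100 + 0 = 60300, which is below 70000 and selects the
fallback p64/p128 helpers. A quick standalone sanity check
(hypothetical, GCC-specific snippet, not part of the patch):

	#include <stdio.h>

	#define GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 \
			+ __GNUC_PATCHLEVEL__)

	int main(void)
	{
		printf("GCC_VERSION = %d (%s)\n", GCC_VERSION,
			GCC_VERSION < 70000 ?
			"fallback helpers compiled in" :
			"native intrinsics available");
		return 0;
	}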
diff --git a/lib/librte_net/net_crc_neon.h b/lib/librte_net/net_crc_neon.h
new file mode 100644
index 0000000..05120a7
--- /dev/null
+++ b/lib/librte_net/net_crc_neon.h
@@ -0,0 +1,357 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2017.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _NET_CRC_NEON_H_
+#define _NET_CRC_NEON_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_net_crc.h>
+#include <rte_vect.h>
+#include <rte_cpuflags.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** PMULL CRC computation context structure */
+struct crc_pmull_ctx {
+	uint64x2_t rk1_rk2;
+	uint64x2_t rk5_rk6;
+	uint64x2_t rk7_rk8;
+};
+
+struct crc_pmull_ctx crc32_eth_pmull __rte_aligned(16);
+struct crc_pmull_ctx crc16_ccitt_pmull __rte_aligned(16);
+
+static inline uint8x16_t
+extract_vector(uint8x16_t v0, uint8x16_t v1, const int n)
+{
+	switch (n) {
+	case 0: return vextq_u8(v0, v1, 0);
+	case 1: return vextq_u8(v0, v1, 1);
+	case 2: return vextq_u8(v0, v1, 2);
+	case 3: return vextq_u8(v0, v1, 3);
+	case 4: return vextq_u8(v0, v1, 4);
+	case 5: return vextq_u8(v0, v1, 5);
+	case 6: return vextq_u8(v0, v1, 6);
+	case 7: return vextq_u8(v0, v1, 7);
+	case 8: return vextq_u8(v0, v1, 8);
+	case 9: return vextq_u8(v0, v1, 9);
+	case 10: return vextq_u8(v0, v1, 10);
+	case 11: return vextq_u8(v0, v1, 11);
+	case 12: return vextq_u8(v0, v1, 12);
+	case 13: return vextq_u8(v0, v1, 13);
+	case 14: return vextq_u8(v0, v1, 14);
+	case 15: return vextq_u8(v0, v1, 15);
+	}
+	return v1;
+}
+
+/**
+ * Shifts right 128 bit register by specified number of bytes
+ *
+ * @param reg 128 bit value
+ * @param num number of bytes to shift reg by (0-16)
+ *
+ * @return reg >> (num * 8)
+ */
+static inline uint64x2_t
+shift_bytes_right(uint64x2_t reg, const unsigned int num)
+{
+	/* Right Shift */
+	return vreinterpretq_u64_u8(extract_vector(
+			vreinterpretq_u8_u64(reg),
+			vdupq_n_u8(0),
+			num));
+}
+
+/**
+ * Shifts left 128 bit register by specified number of bytes
+ *
+ * @param reg 128 bit value
+ * @param num number of bytes to shift reg by (0-16)
+ *
+ * @return reg << (num * 8)
+ */
+static inline uint64x2_t
+shift_bytes_left(uint64x2_t reg, const unsigned int num)
+{
+	/* Left Shift */
+	return vreinterpretq_u64_u8(extract_vector(
+			vdupq_n_u8(0),
+			vreinterpretq_u8_u64(reg),
+			16 - num));
+}
+
+/**
+ * @brief Performs one folding round
+ *
+ * Logically function operates as follows:
+ *     DATA = READ_NEXT_16BYTES();
+ *     F1 = LSB8(FOLD)
+ *     F2 = MSB8(FOLD)
+ *     T1 = CLMUL(F1, RK1)
+ *     T2 = CLMUL(F2, RK2)
+ *     FOLD = XOR(T1, T2, DATA)
+ *
+ * @param data_block 16 byte data block
+ * @param precomp precomputed rk1 and rk2 constants
+ * @param fold running 16 byte folded data
+ *
+ * @return New 16 byte folded data
+ */
+static inline uint64x2_t
+crcr32_folding_round(uint64x2_t data_block, uint64x2_t precomp,
+	uint64x2_t fold)
+{
+	uint64x2_t tmp0 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(fold), 1),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 0)));
+
+	uint64x2_t tmp1 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(fold), 0),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 1)));
+
+	return veorq_u64(tmp1, veorq_u64(data_block, tmp0));
+}
+
+/**
+ * Performs reduction from 128 bits to 64 bits
+ *
+ * @param data128 128 bits data to be reduced
+ * @param precomp rk5 and rk6 precomputed constants
+ *
+ * @return data reduced to 64 bits
+ */
+static inline uint64x2_t
+crcr32_reduce_128_to_64(uint64x2_t data128,
+	uint64x2_t precomp)
+{
+	uint64x2_t tmp0, tmp1, tmp2;
+
+	/* 64b fold */
+	tmp0 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(data128), 0),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 0)));
+	tmp1 = shift_bytes_right(data128, 8);
+	tmp0 = veorq_u64(tmp0, tmp1);
+
+	/* 32b fold */
+	tmp2 = shift_bytes_left(tmp0, 4);
+	tmp1 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(tmp2), 0),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 1)));
+
+	return veorq_u64(tmp1, tmp0);
+}
+
+/**
+ * Performs Barrett's reduction from 64 bits to 32 bits
+ *
+ * @param data64 64 bits data to be reduced
+ * @param precomp rk7 precomputed constant
+ *
+ * @return data reduced to 32 bits
+ */
+static inline uint32_t
+crcr32_reduce_64_to_32(uint64x2_t data64,
+	uint64x2_t precomp)
+{
+	static uint32_t mask1[4] __rte_aligned(16) = {
+		0xffffffff, 0xffffffff, 0x00000000, 0x00000000
+	};
+	static uint32_t mask2[4] __rte_aligned(16) = {
+		0x00000000, 0xffffffff, 0xffffffff, 0xffffffff
+	};
+	uint64x2_t tmp0, tmp1, tmp2;
+
+	tmp0 = vandq_u64(data64, vld1q_u64((uint64_t *)mask2));
+
+	tmp1 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(tmp0), 0),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 0)));
+	tmp1 = veorq_u64(tmp1, tmp0);
+	tmp1 = vandq_u64(tmp1, vld1q_u64((uint64_t *)mask1));
+
+	tmp2 = vreinterpretq_u64_p128(vmull_p64(
+			vgetq_lane_p64(vreinterpretq_p64_u64(tmp1), 0),
+			vgetq_lane_p64(vreinterpretq_p64_u64(precomp), 1)));
+	tmp2 = veorq_u64(tmp2, tmp1);
+	tmp2 = veorq_u64(tmp2, tmp0);
+
+	return vgetq_lane_u32(vreinterpretq_u32_u64(tmp2), 2);
+}
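
All the vmull_p64() calls in net_crc_neon.h are 64 x 64 -> 128 bit
carry-less multiplications. As intuition for what the folding rounds
compute, here is a scalar model of the operation (hypothetical
illustration using GCC's unsigned __int128; clmul64 is a made-up name,
not a DPDK API):

	#include <stdint.h>
	#include <stdio.h>

	/* Carry-less multiply: long multiplication where the partial
	 * products are combined with XOR instead of ADD. */
	static unsigned __int128
	clmul64(uint64_t a, uint64_t b)
	{
		unsigned __int128 acc = 0;
		int i;

		for (i = 0; i < 64; i++)
			if ((b >> i) & 1)
				acc ^= (unsigned __int128)a << i;
		return acc;
	}

	int main(void)
	{
		/* 0x3 clmul 0x3 = 0x5, i.e. (x + 1)^2 = x^2 + 1 over
		 * GF(2), whereas the ordinary product 3 * 3 is 9. */
		printf("0x%llx\n", (unsigned long long)clmul64(0x3, 0x3));
		return 0;
	}

Each folding round performs two such multiplies of the running fold
value against the rk1/rk2 constants and XORs both results into the next
16 bytes of input.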
+
+static inline uint32_t
+crc32_eth_calc_pmull(
+	const uint8_t *data,
+	uint32_t data_len,
+	uint32_t crc,
+	const struct crc_pmull_ctx *params)
+{
+	uint64x2_t temp, fold, k;
+	uint32_t n;
+
+	/* Get CRC init value */
+	temp = vreinterpretq_u64_u32(vsetq_lane_u32(crc, vmovq_n_u32(0), 0));
+
+	/**
+	 * Folding all data into single 16 byte data block
+	 * Assumes: fold holds first 16 bytes of data
+	 */
+	if (unlikely(data_len < 32)) {
+		if (unlikely(data_len == 16)) {
+			/* 16 bytes */
+			fold = vld1q_u64((const uint64_t *)data);
+			fold = veorq_u64(fold, temp);
+			goto reduction_128_64;
+		}
+
+		if (unlikely(data_len < 16)) {
+			/* 0 to 15 bytes */
+			uint8_t buffer[16] __rte_aligned(16);
+
+			memset(buffer, 0, sizeof(buffer));
+			memcpy(buffer, data, data_len);
+
+			fold = vld1q_u64((uint64_t *)buffer);
+			fold = veorq_u64(fold, temp);
+			if (unlikely(data_len < 4)) {
+				fold = shift_bytes_left(fold, 8 - data_len);
+				goto barret_reduction;
+			}
+			fold = shift_bytes_left(fold, 16 - data_len);
+			goto reduction_128_64;
+		}
+		/* 17 to 31 bytes */
+		fold = vld1q_u64((const uint64_t *)data);
+		fold = veorq_u64(fold, temp);
+		n = 16;
+		k = params->rk1_rk2;
+		goto partial_bytes;
+	}
+
+	/** At least 32 bytes in the buffer */
+	/** Apply CRC initial value */
+	fold = vld1q_u64((const uint64_t *)data);
+	fold = veorq_u64(fold, temp);
+
+	/** Main folding loop - the last 16 bytes are processed separately */
+	k = params->rk1_rk2;
+	for (n = 16; (n + 16) <= data_len; n += 16) {
+		temp = vld1q_u64((const uint64_t *)&data[n]);
+		fold = crcr32_folding_round(temp, k, fold);
+	}
+
+partial_bytes:
+	if (likely(n < data_len)) {
+		uint64x2_t last16, a, b, mask;
+		uint32_t rem = data_len & 15;
+
+		last16 = vld1q_u64((const uint64_t *)&data[data_len - 16]);
+		a = shift_bytes_left(fold, 16 - rem);
+		b = shift_bytes_right(fold, rem);
+		mask = shift_bytes_left(vdupq_n_u64(-1), 16 - rem);
+		b = vorrq_u64(b, vandq_u64(mask, last16));
+
+		/* k = rk1 & rk2 */
+		temp = vreinterpretq_u64_p128(vmull_p64(
+				vgetq_lane_p64(vreinterpretq_p64_u64(a), 1),
+				vgetq_lane_p64(vreinterpretq_p64_u64(k), 0)));
+		fold = vreinterpretq_u64_p128(vmull_p64(
+				vgetq_lane_p64(vreinterpretq_p64_u64(a), 0),
+				vgetq_lane_p64(vreinterpretq_p64_u64(k), 1)));
+		fold = veorq_u64(fold, temp);
+		fold = veorq_u64(fold, b);
+	}
+
+	/** Reduction 128 -> 32 Assumes: fold holds 128bit folded data */
reduction_128_64:
+	k = params->rk5_rk6;
+	fold = crcr32_reduce_128_to_64(fold, k);
+
+barret_reduction:
+	k = params->rk7_rk8;
+	n = crcr32_reduce_64_to_32(fold, k);
+
+	return n;
+}
+
+static inline void
+rte_net_crc_neon_init(void)
+{
+	/* Initialize CRC16 data */
+	uint64_t ccitt_k1_k2[2] = {0x189aeLLU, 0x8e10LLU};
+	uint64_t ccitt_k5_k6[2] = {0x189aeLLU, 0x114aaLLU};
+	uint64_t ccitt_k7_k8[2] = {0x11c581910LLU, 0x10811LLU};
+
+	/* Initialize CRC32 data */
+	uint64_t eth_k1_k2[2] = {0xccaa009eLLU, 0x1751997d0LLU};
+	uint64_t eth_k5_k6[2] = {0xccaa009eLLU, 0x163cd6124LLU};
+	uint64_t eth_k7_k8[2] = {0x1f7011640LLU, 0x1db710641LLU};
+
+	/** Save the params in context structure */
+	crc16_ccitt_pmull.rk1_rk2 = vld1q_u64(ccitt_k1_k2);
+	crc16_ccitt_pmull.rk5_rk6 = vld1q_u64(ccitt_k5_k6);
+	crc16_ccitt_pmull.rk7_rk8 = vld1q_u64(ccitt_k7_k8);
+
+	/** Save the params in context structure */
+	crc32_eth_pmull.rk1_rk2 = vld1q_u64(eth_k1_k2);
+	crc32_eth_pmull.rk5_rk6 = vld1q_u64(eth_k5_k6);
+	crc32_eth_pmull.rk7_rk8 = vld1q_u64(eth_k7_k8);
+}
+
+static inline uint32_t
+rte_crc16_ccitt_neon_handler(const uint8_t *data,
+	uint32_t data_len)
+{
+	return (uint16_t)~crc32_eth_calc_pmull(data,
+		data_len,
+		0xffff,
+		&crc16_ccitt_pmull);
+}
+
+static inline uint32_t
+rte_crc32_eth_neon_handler(const uint8_t *data,
+	uint32_t data_len)
+{
+	return ~crc32_eth_calc_pmull(data,
+		data_len,
+		0xffffffffUL,
+		&crc32_eth_pmull);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _NET_CRC_NEON_H_ */
diff --git a/lib/librte_net/rte_net_crc.c b/lib/librte_net/rte_net_crc.c
index e8326fe..be65f34 100644
--- a/lib/librte_net/rte_net_crc.c
+++ b/lib/librte_net/rte_net_crc.c
@@ -43,10 +43,16 @@
 	&& defined(RTE_MACHINE_CPUFLAG_SSE4_2) \
 	&& defined(RTE_MACHINE_CPUFLAG_PCLMULQDQ)
 #define X86_64_SSE42_PCLMULQDQ 1
+#elif defined(RTE_ARCH_ARM64)
+#if defined(RTE_MACHINE_CPUFLAG_PMULL)
+#define ARM64_NEON_PMULL 1
+#endif
 #endif
 
 #ifdef X86_64_SSE42_PCLMULQDQ
 #include <net_crc_sse.h>
+#elif defined(ARM64_NEON_PMULL)
+#include <net_crc_neon.h>
 #endif
 
 /* crc tables */
@@ -74,6 +80,11 @@ static rte_net_crc_handler handlers_sse42[] = {
 	[RTE_NET_CRC16_CCITT] = rte_crc16_ccitt_sse42_handler,
 	[RTE_NET_CRC32_ETH] = rte_crc32_eth_sse42_handler,
 };
+#elif defined(ARM64_NEON_PMULL)
+static rte_net_crc_handler handlers_neon[] = {
+	[RTE_NET_CRC16_CCITT] = rte_crc16_ccitt_neon_handler,
+	[RTE_NET_CRC32_ETH] = rte_crc32_eth_neon_handler,
+};
 #endif
 
 /**
@@ -162,14 +173,20 @@ void
 rte_net_crc_set_alg(enum rte_net_crc_alg alg)
 {
 	switch (alg) {
-	case RTE_NET_CRC_SSE42:
 #ifdef X86_64_SSE42_PCLMULQDQ
+	case RTE_NET_CRC_SSE42:
 		handlers = handlers_sse42;
-#else
-		alg = RTE_NET_CRC_SCALAR;
 		break;
+#elif defined(ARM64_NEON_PMULL)
+	case RTE_NET_CRC_NEON:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_PMULL)) {
+			handlers = handlers_neon;
+			break;
+		}
+		/* fall-through */
 #endif
 	case RTE_NET_CRC_SCALAR:
+		/* fall-through */
 	default:
 		handlers = handlers_scalar;
 		break;
@@ -199,8 +216,13 @@ rte_net_crc_init(void)
 	rte_net_crc_scalar_init();
 
 #ifdef X86_64_SSE42_PCLMULQDQ
-	alg = RTE_NET_CRC_SSE42;
-	rte_net_crc_sse42_init();
+	alg = RTE_NET_CRC_SSE42;
+	rte_net_crc_sse42_init();
+#elif defined(ARM64_NEON_PMULL)
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_PMULL)) {
+		alg = RTE_NET_CRC_NEON;
+		rte_net_crc_neon_init();
+	}
 #endif
 
 	rte_net_crc_set_alg(alg);
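
The NEON handlers installed above must produce the same results as the
scalar LUT handlers, i.e. the standard bit-reflected definitions of the
two CRCs. For reference, hypothetical bit-at-a-time equivalents
(crc32_eth_ref/crc16_ccitt_ref are illustrative names, not part of the
patch):

	#include <stdint.h>
	#include <stddef.h>

	/* CRC-32 Ethernet: reflected polynomial 0xedb88320,
	 * init 0xffffffff, output inverted. */
	static uint32_t
	crc32_eth_ref(const uint8_t *d, size_t n)
	{
		uint32_t crc = 0xffffffff;
		size_t i;
		int b;

		for (i = 0; i < n; i++) {
			crc ^= d[i];
			for (b = 0; b < 8; b++)
				crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
		}
		return ~crc;
	}

	/* CRC-16 CCITT as used here: reflected polynomial 0x8408,
	 * init 0xffff, output inverted; this mirrors the 0xffff seed and
	 * the (uint16_t)~ cast in rte_crc16_ccitt_neon_handler(). */
	static uint16_t
	crc16_ccitt_ref(const uint8_t *d, size_t n)
	{
		uint32_t crc = 0xffff;
		size_t i;
		int b;

		for (i = 0; i < n; i++) {
			crc ^= d[i];
			for (b = 0; b < 8; b++)
				crc = (crc >> 1) ^ ((crc & 1) ? 0x8408 : 0);
		}
		return (uint16_t)~crc;
	}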
diff --git a/lib/librte_net/rte_net_crc.h b/lib/librte_net/rte_net_crc.h
index 76fd129..1daed30 100644
--- a/lib/librte_net/rte_net_crc.h
+++ b/lib/librte_net/rte_net_crc.h
@@ -55,6 +55,7 @@ enum rte_net_crc_type {
 enum rte_net_crc_alg {
 	RTE_NET_CRC_SCALAR = 0,
 	RTE_NET_CRC_SSE42,
+	RTE_NET_CRC_NEON,
 };
 
 /**
@@ -66,6 +67,7 @@ enum rte_net_crc_alg {
 *   This parameter is used to select the CRC implementation version.
 *   - RTE_NET_CRC_SCALAR
 *   - RTE_NET_CRC_SSE42 (Use 64-bit SSE4.2 intrinsic)
+*   - RTE_NET_CRC_NEON (Use ARM Neon intrinsic)
 */
 void
 rte_net_crc_set_alg(enum rte_net_crc_alg alg);
diff --git a/mk/machine/armv8a/rte.vars.mk b/mk/machine/armv8a/rte.vars.mk
index d5049e1..51966a5 100644
--- a/mk/machine/armv8a/rte.vars.mk
+++ b/mk/machine/armv8a/rte.vars.mk
@@ -55,4 +55,4 @@
 # CPU_LDFLAGS =
 # CPU_ASFLAGS =
 
-MACHINE_CFLAGS += -march=armv8-a+crc
+MACHINE_CFLAGS += -march=armv8-a+crc+crypto
diff --git a/mk/machine/thunderx/rte.vars.mk b/mk/machine/thunderx/rte.vars.mk
index ad5a379..6784105 100644
--- a/mk/machine/thunderx/rte.vars.mk
+++ b/mk/machine/thunderx/rte.vars.mk
@@ -55,4 +55,4 @@
 # CPU_LDFLAGS =
 # CPU_ASFLAGS =
 
-MACHINE_CFLAGS += -march=armv8-a+crc -mcpu=thunderx
+MACHINE_CFLAGS += -march=armv8-a+crc+crypto -mcpu=thunderx
diff --git a/mk/rte.cpuflags.mk b/mk/rte.cpuflags.mk
index e634abc..6bbd742 100644
--- a/mk/rte.cpuflags.mk
+++ b/mk/rte.cpuflags.mk
@@ -119,6 +119,9 @@ ifneq ($(filter $(AUTO_CPUFLAGS),__ARM_FEATURE_CRC32),)
 CPUFLAGS += CRC32
 endif
 
+ifneq ($(filter $(AUTO_CPUFLAGS),__ARM_FEATURE_CRYPTO),)
+CPUFLAGS += PMULL
+endif
 
 MACHINE_CFLAGS += $(addprefix -DRTE_MACHINE_CPUFLAG_,$(CPUFLAGS))
diff --git a/mk/toolchain/gcc/rte.toolchain-compat.mk b/mk/toolchain/gcc/rte.toolchain-compat.mk
index 280dde2..01ac7e2 100644
--- a/mk/toolchain/gcc/rte.toolchain-compat.mk
+++ b/mk/toolchain/gcc/rte.toolchain-compat.mk
@@ -60,6 +60,7 @@ else
 #
 	ifeq ($(shell test $(GCC_VERSION) -le 49 && echo 1), 1)
 		MACHINE_CFLAGS := $(patsubst -march=armv8-a+crc,-march=armv8-a+crc -D__ARM_FEATURE_CRC32=1,$(MACHINE_CFLAGS))
+		MACHINE_CFLAGS := $(patsubst -march=armv8-a+crc+crypto,-march=armv8-a+crc+crypto -D__ARM_FEATURE_CRC32=1,$(MACHINE_CFLAGS))
 	endif
 	ifeq ($(shell test $(GCC_VERSION) -le 47 && echo 1), 1)
 		MACHINE_CFLAGS := $(patsubst -march=core-avx-i,-march=corei7-avx,$(MACHINE_CFLAGS))
-- 
2.7.4
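
For completeness, a minimal way to exercise the new code path from an
application (hypothetical example, not part of the patch; on CPUs
without PMULL the library silently stays on the scalar path, and
rte_net_crc_init() is run as a constructor in this version so no
explicit call should be needed). The standard "123456789" test vector
has well-known check values for both algorithms:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#include <rte_net_crc.h>

	int main(void)
	{
		const uint8_t msg[] = "123456789";
		uint32_t len = (uint32_t)strlen((const char *)msg);

		rte_net_crc_set_alg(RTE_NET_CRC_NEON);

		/* Expected: crc32 = 0xcbf43926, crc16 = 0x906e */
		printf("crc32 = 0x%08x\n",
			rte_net_crc_calc(msg, len, RTE_NET_CRC32_ETH));
		printf("crc16 = 0x%04x\n",
			rte_net_crc_calc(msg, len, RTE_NET_CRC16_CCITT));
		return 0;
	}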