From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi, Ruifeng Wang
Cc: dev@dpdk.org
Subject: [PATCH v2 1/1] mldev: split bfloat16 routines to separate files
Date: Mon, 13 Mar 2023 05:03:06 -0700
Message-ID: <20230313120306.28911-1-syalavarthi@marvell.com>
In-Reply-To: <20230313114342.10812-1-syalavarthi@marvell.com>
References: <20230313114342.10812-1-syalavarthi@marvell.com>

Since bfloat16 intrinsics are not supported on all Arm platforms that
support NEON, the bfloat16 routines are moved to separate files. This
enables the scalar bfloat16 implementation to be used on Arm platforms
without bfloat16 support.
Bugzilla ID: 1179
Fixes: fc54766b1612 ("mldev: add Arm NEON type conversion")

Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
Depends-on: patch-120653 ("mldev: remove weak symbols use in type conversions")
Depends-on: patch-125035 ("mldev: fix identical code in conditional branches")

 lib/mldev/meson.build                   |  11 +-
 lib/mldev/mldev_utils_neon.c            | 142 +-------------
 lib/mldev/mldev_utils_neon_bfloat16.c   | 154 ++++++++++++++
 lib/mldev/mldev_utils_scalar.c          | 262 +-----------------------
 lib/mldev/mldev_utils_scalar.h          |  80 ++++++++
 lib/mldev/mldev_utils_scalar_bfloat16.c | 197 ++++++++++++++++++
 6 files changed, 445 insertions(+), 401 deletions(-)
 create mode 100644 lib/mldev/mldev_utils_neon_bfloat16.c
 create mode 100644 lib/mldev/mldev_utils_scalar.h
 create mode 100644 lib/mldev/mldev_utils_scalar_bfloat16.c

diff --git a/lib/mldev/meson.build b/lib/mldev/meson.build
index c9db42257b..5769b0640a 100644
--- a/lib/mldev/meson.build
+++ b/lib/mldev/meson.build
@@ -7,12 +7,21 @@ sources = files(
         'mldev_utils.c',
 )
 
-if dpdk_conf.has('RTE_ARCH_ARM64')
+if (dpdk_conf.has('RTE_ARCH_ARM64') and
+        cc.get_define('__ARM_NEON', args: machine_args) != '')
     sources += files('mldev_utils_neon.c')
 else
     sources += files('mldev_utils_scalar.c')
 endif
 
+if (dpdk_conf.has('RTE_ARCH_ARM64') and
+        cc.get_define('__ARM_NEON', args: machine_args) != '' and
+        cc.get_define('__ARM_FEATURE_BF16', args: machine_args) != '')
+    sources += files('mldev_utils_neon_bfloat16.c')
+else
+    sources += files('mldev_utils_scalar_bfloat16.c')
+endif
+
 headers = files(
         'rte_mldev.h',
 )
diff --git a/lib/mldev/mldev_utils_neon.c b/lib/mldev/mldev_utils_neon.c
index 32b620db20..c7baec012b 100644
--- a/lib/mldev/mldev_utils_neon.c
+++ b/lib/mldev/mldev_utils_neon.c
@@ -12,8 +12,8 @@
 
 /* Description:
  * This file implements vector versions of Machine Learning utility functions used to convert data
- * types from higher precision to lower precision and vice-versa. Implementation is based on Arm
- * Neon intrinsics.
+ * types from higher precision to lower precision and vice-versa, except bfloat16. Implementation
+ * is based on Arm Neon intrinsics.
  */
 
 static inline void
@@ -733,141 +733,3 @@ rte_ml_io_float16_to_float32(uint64_t nb_elements, void *input, void *output)
 
 	return 0;
 }
-
-#ifdef __ARM_FEATURE_BF16
-
-static inline void
-__float32_to_bfloat16_neon_f16x4(float32_t *input, bfloat16_t *output)
-{
-	float32x4_t f32x4;
-	bfloat16x4_t bf16x4;
-
-	/* load 4 x float32_t elements */
-	f32x4 = vld1q_f32(input);
-
-	/* convert float32x4_t to bfloat16x4_t */
-	bf16x4 = vcvt_bf16_f32(f32x4);
-
-	/* store bfloat16x4_t */
-	vst1_bf16(output, bf16x4);
-}
-
-static inline void
-__float32_to_bfloat16_neon_f16x1(float32_t *input, bfloat16_t *output)
-{
-	float32x4_t f32x4;
-	bfloat16x4_t bf16x4;
-
-	/* load element to 4 lanes */
-	f32x4 = vld1q_dup_f32(input);
-
-	/* convert float32_t to bfloat16_t */
-	bf16x4 = vcvt_bf16_f32(f32x4);
-
-	/* store lane 0 / 1 element */
-	vst1_lane_bf16(output, bf16x4, 0);
-}
-
-int
-rte_ml_io_float32_to_bfloat16(uint64_t nb_elements, void *input, void *output)
-{
-	float32_t *input_buffer;
-	bfloat16_t *output_buffer;
-	uint64_t nb_iterations;
-	uint32_t vlen;
-	uint64_t i;
-
-	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
-		return -EINVAL;
-
-	input_buffer = (float32_t *)input;
-	output_buffer = (bfloat16_t *)output;
-	vlen = 2 * sizeof(float32_t) / sizeof(bfloat16_t);
-	nb_iterations = nb_elements / vlen;
-
-	/* convert vlen elements in each iteration */
-	for (i = 0; i < nb_iterations; i++) {
-		__float32_to_bfloat16_neon_f16x4(input_buffer, output_buffer);
-		input_buffer += vlen;
-		output_buffer += vlen;
-	}
-
-	/* convert leftover elements */
-	i = i * vlen;
-	for (; i < nb_elements; i++) {
-		__float32_to_bfloat16_neon_f16x1(input_buffer, output_buffer);
-		input_buffer++;
-		output_buffer++;
-	}
-
-	return 0;
-}
-
-static inline void
-__bfloat16_to_float32_neon_f32x4(bfloat16_t *input, float32_t *output)
-{
-	bfloat16x4_t bf16x4;
-	float32x4_t f32x4;
-
-	/* load 4 x bfloat16_t elements */
-	bf16x4 = vld1_bf16(input);
-
-	/* convert bfloat16x4_t to float32x4_t */
-	f32x4 = vcvt_f32_bf16(bf16x4);
-
-	/* store float32x4_t */
-	vst1q_f32(output, f32x4);
-}
-
-static inline void
-__bfloat16_to_float32_neon_f32x1(bfloat16_t *input, float32_t *output)
-{
-	bfloat16x4_t bf16x4;
-	float32x4_t f32x4;
-
-	/* load element to 4 lanes */
-	bf16x4 = vld1_dup_bf16(input);
-
-	/* convert bfloat16_t to float32_t */
-	f32x4 = vcvt_f32_bf16(bf16x4);
-
-	/* store lane 0 / 1 element */
-	vst1q_lane_f32(output, f32x4, 0);
-}
-
-int
-rte_ml_io_bfloat16_to_float32(uint64_t nb_elements, void *input, void *output)
-{
-	bfloat16_t *input_buffer;
-	float32_t *output_buffer;
-	uint64_t nb_iterations;
-	uint32_t vlen;
-	uint64_t i;
-
-	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
-		return -EINVAL;
-
-	input_buffer = (bfloat16_t *)input;
-	output_buffer = (float32_t *)output;
-	vlen = 2 * sizeof(float32_t) / sizeof(bfloat16_t);
-	nb_iterations = nb_elements / vlen;
-
-	/* convert vlen elements in each iteration */
-	for (i = 0; i < nb_iterations; i++) {
-		__bfloat16_to_float32_neon_f32x4(input_buffer, output_buffer);
-		input_buffer += vlen;
-		output_buffer += vlen;
-	}
-
-	/* convert leftover elements */
-	i = i * vlen;
-	for (; i < nb_elements; i++) {
-		__bfloat16_to_float32_neon_f32x1(input_buffer, output_buffer);
-		input_buffer++;
-		output_buffer++;
-	}
-
-	return 0;
-}
-
-#endif /* __ARM_FEATURE_BF16 */
diff --git a/lib/mldev/mldev_utils_neon_bfloat16.c b/lib/mldev/mldev_utils_neon_bfloat16.c
new file mode 100644
index 0000000000..8dec3fd834
--- /dev/null
+++ b/lib/mldev/mldev_utils_neon_bfloat16.c
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include <errno.h>
+#include <math.h>
+#include <stdint.h>
+
+#include "mldev_utils.h"
+
+#include <arm_neon.h>
+
+/* Description:
+ * This file implements vector versions of Machine Learning utility functions used to convert data
+ * types from bfloat16 to float and vice-versa. Implementation is based on Arm Neon intrinsics.
+ */
+
+#ifdef __ARM_FEATURE_BF16
+
+static inline void
+__float32_to_bfloat16_neon_f16x4(float32_t *input, bfloat16_t *output)
+{
+	float32x4_t f32x4;
+	bfloat16x4_t bf16x4;
+
+	/* load 4 x float32_t elements */
+	f32x4 = vld1q_f32(input);
+
+	/* convert float32x4_t to bfloat16x4_t */
+	bf16x4 = vcvt_bf16_f32(f32x4);
+
+	/* store bfloat16x4_t */
+	vst1_bf16(output, bf16x4);
+}
+
+static inline void
+__float32_to_bfloat16_neon_f16x1(float32_t *input, bfloat16_t *output)
+{
+	float32x4_t f32x4;
+	bfloat16x4_t bf16x4;
+
+	/* load element to 4 lanes */
+	f32x4 = vld1q_dup_f32(input);
+
+	/* convert float32_t to bfloat16_t */
+	bf16x4 = vcvt_bf16_f32(f32x4);
+
+	/* store lane 0 / 1 element */
+	vst1_lane_bf16(output, bf16x4, 0);
+}
+
+int
+rte_ml_io_float32_to_bfloat16(uint64_t nb_elements, void *input, void *output)
+{
+	float32_t *input_buffer;
+	bfloat16_t *output_buffer;
+	uint64_t nb_iterations;
+	uint32_t vlen;
+	uint64_t i;
+
+	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
+		return -EINVAL;
+
+	input_buffer = (float32_t *)input;
+	output_buffer = (bfloat16_t *)output;
+	vlen = 2 * sizeof(float32_t) / sizeof(bfloat16_t);
+	nb_iterations = nb_elements / vlen;
+
+	/* convert vlen elements in each iteration */
+	for (i = 0; i < nb_iterations; i++) {
+		__float32_to_bfloat16_neon_f16x4(input_buffer, output_buffer);
+		input_buffer += vlen;
+		output_buffer += vlen;
+	}
+
+	/* convert leftover elements */
+	i = i * vlen;
+	for (; i < nb_elements; i++) {
+		__float32_to_bfloat16_neon_f16x1(input_buffer, output_buffer);
+		input_buffer++;
+		output_buffer++;
+	}
+
+	return 0;
+}
+
+static inline void
+__bfloat16_to_float32_neon_f32x4(bfloat16_t *input, float32_t *output)
+{
+	bfloat16x4_t bf16x4;
+	float32x4_t f32x4;
+
+	/* load 4 x bfloat16_t elements */
+	bf16x4 = vld1_bf16(input);
+
+	/* convert bfloat16x4_t to float32x4_t */
+	f32x4 = vcvt_f32_bf16(bf16x4);
+
+	/* store float32x4_t */
+	vst1q_f32(output, f32x4);
+}
+
+static inline void
+__bfloat16_to_float32_neon_f32x1(bfloat16_t *input, float32_t *output)
+{
+	bfloat16x4_t bf16x4;
+	float32x4_t f32x4;
+
+	/* load element to 4 lanes */
+	bf16x4 = vld1_dup_bf16(input);
+
+	/* convert bfloat16_t to float32_t */
+	f32x4 = vcvt_f32_bf16(bf16x4);
+
+	/* store lane 0 / 1 element */
+	vst1q_lane_f32(output, f32x4, 0);
+}
+
+int
+rte_ml_io_bfloat16_to_float32(uint64_t nb_elements, void *input, void *output)
+{
+	bfloat16_t *input_buffer;
+	float32_t *output_buffer;
+	uint64_t nb_iterations;
+	uint32_t vlen;
+	uint64_t i;
+
+	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
+		return -EINVAL;
+
+	input_buffer = (bfloat16_t *)input;
+	output_buffer = (float32_t *)output;
+	vlen = 2 * sizeof(float32_t) / sizeof(bfloat16_t);
+	nb_iterations = nb_elements / vlen;
+
+	/* convert vlen elements in each iteration */
+	for (i = 0; i < nb_iterations; i++) {
+		__bfloat16_to_float32_neon_f32x4(input_buffer, output_buffer);
+		input_buffer += vlen;
+		output_buffer += vlen;
+	}
+
+	/* convert leftover elements */
+	i = i * vlen;
+	for (; i < nb_elements; i++) {
+		__bfloat16_to_float32_neon_f32x1(input_buffer, output_buffer);
+		input_buffer++;
+		output_buffer++;
+	}
+
+	return 0;
+}
+
+#endif /* __ARM_FEATURE_BF16 */
diff --git a/lib/mldev/mldev_utils_scalar.c b/lib/mldev/mldev_utils_scalar.c
index b76a8fb326..92be5daee8 100644
--- a/lib/mldev/mldev_utils_scalar.c
+++ b/lib/mldev/mldev_utils_scalar.c
@@ -2,88 +2,13 @@
  * Copyright (c) 2022 Marvell.
  */
 
-#include <errno.h>
-#include <math.h>
-#include <stdint.h>
-
-#include "mldev_utils.h"
+#include "mldev_utils_scalar.h"
 
 /* Description:
  * This file implements scalar versions of Machine Learning utility functions used to convert data
- * types from higher precision to lower precision and vice-versa.
+ * types from higher precision to lower precision and vice-versa, except bfloat16.
  */
 
-#ifndef BIT
-#define BIT(nr) (1UL << (nr))
-#endif
-
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-
-#ifndef GENMASK_U32
-#define GENMASK_U32(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif
-
-/* float32: bit index of MSB & LSB of sign, exponent and mantissa */
-#define FP32_LSB_M 0
-#define FP32_MSB_M 22
-#define FP32_LSB_E 23
-#define FP32_MSB_E 30
-#define FP32_LSB_S 31
-#define FP32_MSB_S 31
-
-/* float32: bitmask for sign, exponent and mantissa */
-#define FP32_MASK_S GENMASK_U32(FP32_MSB_S, FP32_LSB_S)
-#define FP32_MASK_E GENMASK_U32(FP32_MSB_E, FP32_LSB_E)
-#define FP32_MASK_M GENMASK_U32(FP32_MSB_M, FP32_LSB_M)
-
-/* float16: bit index of MSB & LSB of sign, exponent and mantissa */
-#define FP16_LSB_M 0
-#define FP16_MSB_M 9
-#define FP16_LSB_E 10
-#define FP16_MSB_E 14
-#define FP16_LSB_S 15
-#define FP16_MSB_S 15
-
-/* float16: bitmask for sign, exponent and mantissa */
-#define FP16_MASK_S GENMASK_U32(FP16_MSB_S, FP16_LSB_S)
-#define FP16_MASK_E GENMASK_U32(FP16_MSB_E, FP16_LSB_E)
-#define FP16_MASK_M GENMASK_U32(FP16_MSB_M, FP16_LSB_M)
-
-/* bfloat16: bit index of MSB & LSB of sign, exponent and mantissa */
-#define BF16_LSB_M 0
-#define BF16_MSB_M 6
-#define BF16_LSB_E 7
-#define BF16_MSB_E 14
-#define BF16_LSB_S 15
-#define BF16_MSB_S 15
-
-/* bfloat16: bitmask for sign, exponent and mantissa */
-#define BF16_MASK_S GENMASK_U32(BF16_MSB_S, BF16_LSB_S)
-#define BF16_MASK_E GENMASK_U32(BF16_MSB_E, BF16_LSB_E)
-#define BF16_MASK_M GENMASK_U32(BF16_MSB_M, BF16_LSB_M)
-
-/* Exponent bias */
-#define FP32_BIAS_E 127
-#define FP16_BIAS_E 15
-#define BF16_BIAS_E 127
-
-#define FP32_PACK(sign, exponent, mantissa) \
-	(((sign) << FP32_LSB_S) | ((exponent) << FP32_LSB_E) | (mantissa))
-
-#define FP16_PACK(sign, exponent, mantissa) \
-	(((sign) << FP16_LSB_S) | ((exponent) << FP16_LSB_E) | (mantissa))
-
-#define BF16_PACK(sign, exponent, mantissa) \
-	(((sign) << BF16_LSB_S) | ((exponent) << BF16_LSB_E) | (mantissa))
-
-/* Represent float32 as float and uint32_t */
-union float32 {
-	float f;
-	uint32_t u;
-};
-
 int
 rte_ml_io_float32_to_int8(float scale, uint64_t nb_elements, void *input, void *output)
 {
@@ -532,186 +457,3 @@ rte_ml_io_float16_to_float32(uint64_t nb_elements, void *input, void *output)
 
 	return 0;
 }
-
-/* Convert a single precision floating point number (float32) into a
- * brain float number (bfloat16) using round to nearest rounding mode.
- */
-static uint16_t
-__float32_to_bfloat16_scalar_rtn(float x)
-{
-	union float32 f32; /* float32 input */
-	uint32_t f32_s;	/* float32 sign */
-	uint32_t f32_e;	/* float32 exponent */
-	uint32_t f32_m;	/* float32 mantissa */
-	uint16_t b16_s;	/* float16 sign */
-	uint16_t b16_e;	/* float16 exponent */
-	uint16_t b16_m;	/* float16 mantissa */
-	uint32_t tbits;	/* number of truncated bits */
-	uint16_t u16;	/* float16 output */
-
-	f32.f = x;
-	f32_s = (f32.u & FP32_MASK_S) >> FP32_LSB_S;
-	f32_e = (f32.u & FP32_MASK_E) >> FP32_LSB_E;
-	f32_m = (f32.u & FP32_MASK_M) >> FP32_LSB_M;
-
-	b16_s = f32_s;
-	b16_e = 0;
-	b16_m = 0;
-
-	switch (f32_e) {
-	case (0): /* float32: zero or subnormal number */
-		b16_e = 0;
-		if (f32_m == 0) /* zero */
-			b16_m = 0;
-		else /* subnormal float32 number, normal bfloat16 */
-			goto bf16_normal;
-		break;
-	case (FP32_MASK_E >> FP32_LSB_E): /* float32: infinity or nan */
-		b16_e = BF16_MASK_E >> BF16_LSB_E;
-		if (f32_m == 0) { /* infinity */
-			b16_m = 0;
-		} else { /* nan, propagate mantissa and set MSB of mantissa to 1 */
-			b16_m = f32_m >> (FP32_MSB_M - BF16_MSB_M);
-			b16_m |= BIT(BF16_MSB_M);
-		}
-		break;
-	default: /* float32: normal number, normal bfloat16 */
-		goto bf16_normal;
-	}
-
-	goto bf16_pack;
-
-bf16_normal:
-	b16_e = f32_e;
-	tbits = FP32_MSB_M - BF16_MSB_M;
-	b16_m = f32_m >> tbits;
-
-	/* if non-leading truncated bits are set */
-	if ((f32_m & GENMASK_U32(tbits - 1, 0)) > BIT(tbits - 1)) {
-		b16_m++;
-
-		/* if overflow into exponent */
-		if (((b16_m & BF16_MASK_E) >> BF16_LSB_E) == 0x1)
-			b16_e++;
-	} else if ((f32_m & GENMASK_U32(tbits - 1, 0)) == BIT(tbits - 1)) {
-		/* if only leading truncated bit is set */
-		if ((b16_m & 0x1) == 0x1) {
-			b16_m++;
-
-			/* if overflow into exponent */
-			if (((b16_m & BF16_MASK_E) >> BF16_LSB_E) == 0x1)
-				b16_e++;
-		}
-	}
-	b16_m = b16_m & BF16_MASK_M;
-
-bf16_pack:
-	u16 = BF16_PACK(b16_s, b16_e, b16_m);
-
-	return u16;
-}
-
-int
-rte_ml_io_float32_to_bfloat16(uint64_t nb_elements, void *input, void *output)
-{
-	float *input_buffer;
-	uint16_t *output_buffer;
-	uint64_t i;
-
-	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
-		return -EINVAL;
-
-	input_buffer = (float *)input;
-	output_buffer = (uint16_t *)output;
-
-	for (i = 0; i < nb_elements; i++) {
-		*output_buffer = __float32_to_bfloat16_scalar_rtn(*input_buffer);
-
-		input_buffer = input_buffer + 1;
-		output_buffer = output_buffer + 1;
-	}
-
-	return 0;
-}
-
-/* Convert a brain float number (bfloat16) into a
- * single precision floating point number (float32).
- */
-static float
-__bfloat16_to_float32_scalar_rtx(uint16_t f16)
-{
-	union float32 f32; /* float32 output */
-	uint16_t b16_s;	/* float16 sign */
-	uint16_t b16_e;	/* float16 exponent */
-	uint16_t b16_m;	/* float16 mantissa */
-	uint32_t f32_s;	/* float32 sign */
-	uint32_t f32_e;	/* float32 exponent */
-	uint32_t f32_m;	/* float32 mantissa*/
-	uint8_t shift;	/* number of bits to be shifted */
-
-	b16_s = (f16 & BF16_MASK_S) >> BF16_LSB_S;
-	b16_e = (f16 & BF16_MASK_E) >> BF16_LSB_E;
-	b16_m = (f16 & BF16_MASK_M) >> BF16_LSB_M;
-
-	f32_s = b16_s;
-	switch (b16_e) {
-	case (BF16_MASK_E >> BF16_LSB_E): /* bfloat16: infinity or nan */
-		f32_e = FP32_MASK_E >> FP32_LSB_E;
-		if (b16_m == 0x0) { /* infinity */
-			f32_m = 0;
-		} else { /* nan, propagate mantissa, set MSB of mantissa to 1 */
-			f32_m = b16_m;
-			shift = FP32_MSB_M - BF16_MSB_M;
-			f32_m = (f32_m << shift) & FP32_MASK_M;
-			f32_m |= BIT(FP32_MSB_M);
-		}
-		break;
-	case 0: /* bfloat16: zero or subnormal */
-		f32_m = b16_m;
-		if (b16_m == 0) { /* zero signed */
-			f32_e = 0;
-		} else { /* subnormal numbers */
-			goto fp32_normal;
-		}
-		break;
-	default: /* bfloat16: normal number */
-		goto fp32_normal;
-	}
-
-	goto fp32_pack;
-
-fp32_normal:
-	f32_m = b16_m;
-	f32_e = FP32_BIAS_E + b16_e - BF16_BIAS_E;
-
-	shift = (FP32_MSB_M - BF16_MSB_M);
-	f32_m = (f32_m << shift) & FP32_MASK_M;
-
-fp32_pack:
-	f32.u = FP32_PACK(f32_s, f32_e, f32_m);
-
-	return f32.f;
-}
-
-int
-rte_ml_io_bfloat16_to_float32(uint64_t nb_elements, void *input, void *output)
-{
-	uint16_t *input_buffer;
-	float *output_buffer;
-	uint64_t i;
-
-	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
-		return -EINVAL;
-
-	input_buffer = (uint16_t *)input;
-	output_buffer = (float *)output;
-
-	for (i = 0; i < nb_elements; i++) {
-		*output_buffer = __bfloat16_to_float32_scalar_rtx(*input_buffer);
-
-		input_buffer = input_buffer + 1;
-		output_buffer = output_buffer + 1;
-	}
-
-	return 0;
-}
diff --git a/lib/mldev/mldev_utils_scalar.h b/lib/mldev/mldev_utils_scalar.h
new file mode 100644
index 0000000000..57e66ddb60
--- /dev/null
+++ b/lib/mldev/mldev_utils_scalar.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include <errno.h>
+#include <math.h>
+#include <stdint.h>
+
+#include "mldev_utils.h"
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+
+#ifndef GENMASK_U32
+#define GENMASK_U32(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+
+/* float32: bit index of MSB & LSB of sign, exponent and mantissa */
+#define FP32_LSB_M 0
+#define FP32_MSB_M 22
+#define FP32_LSB_E 23
+#define FP32_MSB_E 30
+#define FP32_LSB_S 31
+#define FP32_MSB_S 31
+
+/* float32: bitmask for sign, exponent and mantissa */
+#define FP32_MASK_S GENMASK_U32(FP32_MSB_S, FP32_LSB_S)
+#define FP32_MASK_E GENMASK_U32(FP32_MSB_E, FP32_LSB_E)
+#define FP32_MASK_M GENMASK_U32(FP32_MSB_M, FP32_LSB_M)
+
+/* float16: bit index of MSB & LSB of sign, exponent and mantissa */
+#define FP16_LSB_M 0
+#define FP16_MSB_M 9
+#define FP16_LSB_E 10
+#define FP16_MSB_E 14
+#define FP16_LSB_S 15
+#define FP16_MSB_S 15
+
+/* float16: bitmask for sign, exponent and mantissa */
+#define FP16_MASK_S GENMASK_U32(FP16_MSB_S, FP16_LSB_S)
+#define FP16_MASK_E GENMASK_U32(FP16_MSB_E, FP16_LSB_E)
+#define FP16_MASK_M GENMASK_U32(FP16_MSB_M, FP16_LSB_M)
+
+/* bfloat16: bit index of MSB & LSB of sign, exponent and mantissa */
+#define BF16_LSB_M 0
+#define BF16_MSB_M 6
+#define BF16_LSB_E 7
+#define BF16_MSB_E 14
+#define BF16_LSB_S 15
+#define BF16_MSB_S 15
+
+/* bfloat16: bitmask for sign, exponent and mantissa */
+#define BF16_MASK_S GENMASK_U32(BF16_MSB_S, BF16_LSB_S)
+#define BF16_MASK_E GENMASK_U32(BF16_MSB_E, BF16_LSB_E)
+#define BF16_MASK_M GENMASK_U32(BF16_MSB_M, BF16_LSB_M)
+
+/* Exponent bias */
+#define FP32_BIAS_E 127
+#define FP16_BIAS_E 15
+#define BF16_BIAS_E 127
+
+#define FP32_PACK(sign, exponent, mantissa) \
+	(((sign) << FP32_LSB_S) | ((exponent) << FP32_LSB_E) | (mantissa))
+
+#define FP16_PACK(sign, exponent, mantissa) \
+	(((sign) << FP16_LSB_S) | ((exponent) << FP16_LSB_E) | (mantissa))
+
+#define BF16_PACK(sign, exponent, mantissa) \
+	(((sign) << BF16_LSB_S) | ((exponent) << BF16_LSB_E) | (mantissa))
+
+/* Represent float32 as float and uint32_t */
+union float32 {
+	float f;
+	uint32_t u;
+};
diff --git a/lib/mldev/mldev_utils_scalar_bfloat16.c b/lib/mldev/mldev_utils_scalar_bfloat16.c
new file mode 100644
index 0000000000..1437416313
--- /dev/null
+++ b/lib/mldev/mldev_utils_scalar_bfloat16.c
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include <errno.h>
+#include <math.h>
+#include <stdint.h>
+
+#include "mldev_utils_scalar.h"
+
+/* Description:
+ * This file implements scalar versions of Machine Learning utility functions used to convert data
+ * types from bfloat16 to float32 and vice-versa.
+ */
+
+/* Convert a single precision floating point number (float32) into a
+ * brain float number (bfloat16) using round to nearest rounding mode.
+ */
+static uint16_t
+__float32_to_bfloat16_scalar_rtn(float x)
+{
+	union float32 f32; /* float32 input */
+	uint32_t f32_s;	/* float32 sign */
+	uint32_t f32_e;	/* float32 exponent */
+	uint32_t f32_m;	/* float32 mantissa */
+	uint16_t b16_s;	/* float16 sign */
+	uint16_t b16_e;	/* float16 exponent */
+	uint16_t b16_m;	/* float16 mantissa */
+	uint32_t tbits;	/* number of truncated bits */
+	uint16_t u16;	/* float16 output */
+
+	f32.f = x;
+	f32_s = (f32.u & FP32_MASK_S) >> FP32_LSB_S;
+	f32_e = (f32.u & FP32_MASK_E) >> FP32_LSB_E;
+	f32_m = (f32.u & FP32_MASK_M) >> FP32_LSB_M;
+
+	b16_s = f32_s;
+	b16_e = 0;
+	b16_m = 0;
+
+	switch (f32_e) {
+	case (0): /* float32: zero or subnormal number */
+		b16_e = 0;
+		if (f32_m == 0) /* zero */
+			b16_m = 0;
+		else /* subnormal float32 number, normal bfloat16 */
+			goto bf16_normal;
+		break;
+	case (FP32_MASK_E >> FP32_LSB_E): /* float32: infinity or nan */
+		b16_e = BF16_MASK_E >> BF16_LSB_E;
+		if (f32_m == 0) { /* infinity */
+			b16_m = 0;
+		} else { /* nan, propagate mantissa and set MSB of mantissa to 1 */
+			b16_m = f32_m >> (FP32_MSB_M - BF16_MSB_M);
+			b16_m |= BIT(BF16_MSB_M);
+		}
+		break;
+	default: /* float32: normal number, normal bfloat16 */
+		goto bf16_normal;
+	}
+
+	goto bf16_pack;
+
+bf16_normal:
+	b16_e = f32_e;
+	tbits = FP32_MSB_M - BF16_MSB_M;
+	b16_m = f32_m >> tbits;
+
+	/* if non-leading truncated bits are set */
+	if ((f32_m & GENMASK_U32(tbits - 1, 0)) > BIT(tbits - 1)) {
+		b16_m++;
+
+		/* if overflow into exponent */
+		if (((b16_m & BF16_MASK_E) >> BF16_LSB_E) == 0x1)
+			b16_e++;
+	} else if ((f32_m & GENMASK_U32(tbits - 1, 0)) == BIT(tbits - 1)) {
+		/* if only leading truncated bit is set */
+		if ((b16_m & 0x1) == 0x1) {
+			b16_m++;
+
+			/* if overflow into exponent */
+			if (((b16_m & BF16_MASK_E) >> BF16_LSB_E) == 0x1)
+				b16_e++;
+		}
+	}
+	b16_m = b16_m & BF16_MASK_M;
+
+bf16_pack:
+	u16 = BF16_PACK(b16_s, b16_e, b16_m);
+
+	return u16;
+}
+
+int
+rte_ml_io_float32_to_bfloat16(uint64_t nb_elements, void *input, void *output)
+{
+	float *input_buffer;
+	uint16_t *output_buffer;
+	uint64_t i;
+
+	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
+		return -EINVAL;
+
+	input_buffer = (float *)input;
+	output_buffer = (uint16_t *)output;
+
+	for (i = 0; i < nb_elements; i++) {
+		*output_buffer = __float32_to_bfloat16_scalar_rtn(*input_buffer);
+
+		input_buffer = input_buffer + 1;
+		output_buffer = output_buffer + 1;
+	}
+
+	return 0;
+}
+
+/* Convert a brain float number (bfloat16) into a
+ * single precision floating point number (float32).
+ */
+static float
+__bfloat16_to_float32_scalar_rtx(uint16_t f16)
+{
+	union float32 f32; /* float32 output */
+	uint16_t b16_s;	/* float16 sign */
+	uint16_t b16_e;	/* float16 exponent */
+	uint16_t b16_m;	/* float16 mantissa */
+	uint32_t f32_s;	/* float32 sign */
+	uint32_t f32_e;	/* float32 exponent */
+	uint32_t f32_m;	/* float32 mantissa*/
+	uint8_t shift;	/* number of bits to be shifted */
+
+	b16_s = (f16 & BF16_MASK_S) >> BF16_LSB_S;
+	b16_e = (f16 & BF16_MASK_E) >> BF16_LSB_E;
+	b16_m = (f16 & BF16_MASK_M) >> BF16_LSB_M;
+
+	f32_s = b16_s;
+	switch (b16_e) {
+	case (BF16_MASK_E >> BF16_LSB_E): /* bfloat16: infinity or nan */
+		f32_e = FP32_MASK_E >> FP32_LSB_E;
+		if (b16_m == 0x0) { /* infinity */
+			f32_m = 0;
+		} else { /* nan, propagate mantissa, set MSB of mantissa to 1 */
+			f32_m = b16_m;
+			shift = FP32_MSB_M - BF16_MSB_M;
+			f32_m = (f32_m << shift) & FP32_MASK_M;
+			f32_m |= BIT(FP32_MSB_M);
+		}
+		break;
+	case 0: /* bfloat16: zero or subnormal */
+		f32_m = b16_m;
+		if (b16_m == 0) { /* zero signed */
+			f32_e = 0;
+		} else { /* subnormal numbers */
+			goto fp32_normal;
+		}
+		break;
+	default: /* bfloat16: normal number */
+		goto fp32_normal;
+	}
+
+	goto fp32_pack;
+
+fp32_normal:
+	f32_m = b16_m;
+	f32_e = FP32_BIAS_E + b16_e - BF16_BIAS_E;
+
+	shift = (FP32_MSB_M - BF16_MSB_M);
+	f32_m = (f32_m << shift) & FP32_MASK_M;
+
+fp32_pack:
+	f32.u = FP32_PACK(f32_s, f32_e, f32_m);
+
+	return f32.f;
+}
+
+int
+rte_ml_io_bfloat16_to_float32(uint64_t nb_elements, void *input, void *output)
+{
+	uint16_t *input_buffer;
+	float *output_buffer;
+	uint64_t i;
+
+	if ((nb_elements == 0) || (input == NULL) || (output == NULL))
+		return -EINVAL;
+
+	input_buffer = (uint16_t *)input;
+	output_buffer = (float *)output;
+
+	for (i = 0; i < nb_elements; i++) {
+		*output_buffer = __bfloat16_to_float32_scalar_rtx(*input_buffer);
+
+		input_buffer = input_buffer + 1;
+		output_buffer = output_buffer + 1;
+	}
+
+	return 0;
+}
-- 
2.17.1
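For illustration, a minimal caller-side sketch (not part of the patch; these
conversion helpers are internal to the mldev library, and the buffer values
below are made up). Both the NEON and scalar builds selected by the meson
logic above export the same pair of routines, so a round-trip through
bfloat16 looks identical to callers on every platform:

	#include <stdint.h>
	#include <stdio.h>

	#include "mldev_utils.h"	/* declares the rte_ml_io_* conversion helpers */

	int
	main(void)
	{
		float in[4] = {1.0f, 0.5f, -2.25f, 100.0f};
		uint16_t bf16[4];	/* bfloat16 values held as raw 16-bit words */
		float out[4];
		int i;

		/* float32 -> bfloat16; NEON or scalar body is chosen at build time */
		if (rte_ml_io_float32_to_bfloat16(4, in, bf16) != 0)
			return 1;

		/* bfloat16 -> float32 */
		if (rte_ml_io_bfloat16_to_float32(4, bf16, out) != 0)
			return 1;

		for (i = 0; i < 4; i++)
			printf("%f -> 0x%04x -> %f\n", in[i], bf16[i], out[i]);

		return 0;
	}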