From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand
Date: Thu, 25 Aug 2022 15:56:22 +0200
Subject: Re: [PATCH v5 1/7] eal/loongarch: support LoongArch architecture
To: Min Zhou
Cc: Thomas Monjalon, Bruce Richardson, "Burakov, Anatoly", Qiming Yang,
 Yuying Zhang, Jakub Grajciar, Konstantin Ananyev, dev, maobibo@loongson.cn
In-Reply-To: <20220824083123.583704-2-zhoumin@loongson.cn>
References: <20220824083123.583704-1-zhoumin@loongson.cn>
 <20220824083123.583704-2-zhoumin@loongson.cn>
List-Id: DPDK patches and discussions

This is only a first pass.

On Wed, Aug 24, 2022 at 10:31 AM Min Zhou wrote:
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 32ffdd1a61..f00b82b5ce 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -311,6 +311,12 @@ F: config/riscv/
>  F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
>  F: lib/eal/riscv/
>
> +LoongArch
> +M: Min Zhou
> +F: config/loongarch/
> +F: doc/guides/linux_gsg/cross_build_dpdk_for_loongarch.rst
> +F: lib/eal/loongarch/
> +

I tried to put entries in MAINTAINERS in a pseudo-alphabetical order
(ignoring the vendor name). We currently have: ARM, Power, RISC-V, X86.
As a consequence, the LoongArch block should be moved between the ARM
and Power arches.

>  Intel x86
>  M: Bruce Richardson
>  M: Konstantin Ananyev

[snip]

> diff --git a/config/loongarch/meson.build b/config/loongarch/meson.build
> new file mode 100644
> index 0000000000..e052fbad7f
> --- /dev/null
> +++ b/config/loongarch/meson.build
> @@ -0,0 +1,43 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 Loongson Technology Corporation Limited
> +
> +if not dpdk_conf.get('RTE_ARCH_64')
> +    error('Only 64-bit compiles are supported for this platform type')
> +endif
> +dpdk_conf.set('RTE_ARCH', 'loongarch')
> +dpdk_conf.set('RTE_ARCH_LOONGARCH', 1)
> +dpdk_conf.set('RTE_ARCH_NO_VECTOR', 1)

RTE_ARCH_NO_VECTOR is not used anywhere, please remove it.

> +
> +machine_args_generic = [
> +    ['default', ['-march=loongarch64']],
> +]
> +
> +flags_generic = [
> +    ['RTE_MACHINE', '"loongarch64"'],
> +    ['RTE_MAX_LCORE', 64],
> +    ['RTE_MAX_NUMA_NODES', 16],
> +    ['RTE_CACHE_LINE_SIZE', 64]]
> +
> +impl_generic = ['Generic loongarch', flags_generic, machine_args_generic]
> +
> +machine = []
> +machine_args = []
> +
> +machine = impl_generic
> +impl_pn = 'default'
> +
> +message('Implementer : ' + machine[0])
> +foreach flag: machine[1]
> +    if flag.length() > 0
> +        dpdk_conf.set(flag[0], flag[1])
> +    endif
> +endforeach
> +
> +foreach marg: machine[2]
> +    if marg[0] == impl_pn
> +        foreach f: marg[1]
> +            machine_args += f
> +        endforeach
> +    endif
> +endforeach
> +message(machine_args)

[snip]

> diff --git a/doc/guides/linux_gsg/index.rst b/doc/guides/linux_gsg/index.rst
> index 747552c385..c34966b241 100644
> --- a/doc/guides/linux_gsg/index.rst
> +++ b/doc/guides/linux_gsg/index.rst
> @@ -15,6 +15,7 @@ Getting Started Guide for Linux
>      build_dpdk
>      cross_build_dpdk_for_arm64
>      cross_build_dpdk_for_riscv
> +    cross_build_dpdk_for_loongarch

In alphabetical order, please.

>      linux_drivers
>      build_sample_apps
>      linux_eal_parameters
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 7f6cb914a5..8afa7ef7fd 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -848,6 +848,12 @@ rv64
>  Support 64-bit RISC-V architecture.
>
>
> +LoongArch64
> +-----------
> +
> +Support 64-bit LoongArch architecture.
> +
> +

Idem.

>  x86-32
>  ------
>
> diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
> index d1db0c256a..8bbc4600bd 100644
> --- a/doc/guides/nics/features/default.ini
> +++ b/doc/guides/nics/features/default.ini
> @@ -73,6 +73,7 @@ ARMv7 =
>  ARMv8 =
>  Power8 =
>  rv64 =
> +LoongArch64 =

Idem.

>  x86-32 =
>  x86-64 =
>  Usage doc =
>
> diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
> index 8c021cf050..126160d683 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -55,6 +55,12 @@ New Features
>     Also, make sure to start the actual text at the margin.
>     =======================================================
>
> +* **Added initial LoongArch architecture support.**
> +
> +  Added EAL implementation for LoongArch architecture. The initial devices
> +  the porting was tested on included Loongson 3A5000, Loongson 3C5000 and
> +  Loongson 3C5000L. In theory this implementation should work with any target
> +  based on ``LoongArch`` ISA.

Sections in the release notes are separated with two empty lines.

>
>  Removed Items
>  -------------

[snip]

> diff --git a/lib/eal/loongarch/include/meson.build b/lib/eal/loongarch/include/meson.build
> new file mode 100644
> index 0000000000..d5699c5373
> --- /dev/null
> +++ b/lib/eal/loongarch/include/meson.build
> @@ -0,0 +1,21 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 Loongson Technology Corporation Limited
> +
> +arch_headers = files(
> +    'rte_atomic.h',
> +    'rte_byteorder.h',
> +    'rte_cpuflags.h',
> +    'rte_cycles.h',
> +    'rte_io.h',
> +    'rte_mcslock.h',
> +    'rte_memcpy.h',
> +    'rte_pause.h',
> +    'rte_pflock.h',
> +    'rte_power_intrinsics.h',
> +    'rte_prefetch.h',
> +    'rte_rwlock.h',
> +    'rte_spinlock.h',
> +    'rte_ticketlock.h',
> +    'rte_vect.h',
> +)

mcslock, pflock and ticketlock are now non-arch-specific headers.
They can be removed from the loongarch include directory.
See: e5e613f05b8c ("eal: remove unused arch-specific headers for locks")

> +install_headers(arch_headers, subdir: get_option('include_subdir_arch'))
> diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
> new file mode 100644
> index 0000000000..8e007e7f76
> --- /dev/null
> +++ b/lib/eal/loongarch/include/rte_atomic.h
> @@ -0,0 +1,253 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef _RTE_ATOMIC_LOONGARCH_H_
> +#define _RTE_ATOMIC_LOONGARCH_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include
> +#include "generic/rte_atomic.h"
> +
> +/**
> + * LoongArch Synchronize
> + */
> +static inline void synchronize(void)

This name is too generic.
Plus, all the memory barriers are implemented in the same way.
I suggest defining rte_mb() as this inline helper.
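For example, something along these lines (untested sketch, same "dbar 0"
barrier as in the patch, just exposed directly under the rte_mb() name):

/* Full barrier: loads/stores before it complete before those after it. */
static inline void rte_mb(void)
{
        __asm__ __volatile__("dbar 0" : : : "memory");
}

/* The patch maps the weaker barriers to the same instruction anyway. */
#define rte_wmb() rte_mb()
#define rte_rmb() rte_mb()
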
> +{
> +        __asm__ __volatile__("dbar 0":::"memory");
> +}
> +
> +/**
> + * General memory barrier.
> + *
> + * Guarantees that the LOAD and STORE operations generated before the
> + * barrier occur before the LOAD and STORE operations generated after.
> + * This function is architecture dependent.

Those comments are copied from the generic header which is used for doxygen,
but you don't need them in the arch-specific header. Please remove them.

> + */
> +#define rte_mb() synchronize()
> +
> +/**
> + * Write memory barrier.
> + *
> + * Guarantees that the STORE operations generated before the barrier
> + * occur before the STORE operations generated after.
> + * This function is architecture dependent.
> + */
> +#define rte_wmb() synchronize()
> +
> +/**
> + * Read memory barrier.
> + *
> + * Guarantees that the LOAD operations generated before the barrier
> + * occur before the LOAD operations generated after.
> + * This function is architecture dependent.
> + */
> +#define rte_rmb() synchronize()
> +
> +#define rte_smp_mb() rte_mb()
> +
> +#define rte_smp_wmb() rte_mb()
> +
> +#define rte_smp_rmb() rte_mb()
> +
> +#define rte_io_mb() rte_mb()
> +
> +#define rte_io_wmb() rte_mb()
> +
> +#define rte_io_rmb() rte_mb()
> +
> +static __rte_always_inline void
> +rte_atomic_thread_fence(int memorder)
> +{
> +        __atomic_thread_fence(memorder);
> +}
> +
> +#ifndef RTE_FORCE_INTRINSICS

Unless I missed something, there are no loongarch-specific implementations
when RTE_FORCE_INTRINSICS is unset.
What is the point of supporting the case where RTE_FORCE_INTRINSICS is
undefined? If there is no need, force-set RTE_FORCE_INTRINSICS in the
config and then update the headers accordingly.

> +/*------------------------- 16 bit atomic operations -------------------------*/
> +static inline int
> +rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
> +{
> +        return __sync_bool_compare_and_swap(dst, exp, src);
> +}
> +
> +static inline uint16_t
> +rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
> +{
> +#if defined(__clang__)
> +        return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
> +#else
> +        return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +static inline void
> +rte_atomic16_inc(rte_atomic16_t *v)
> +{
> +        rte_atomic16_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic16_dec(rte_atomic16_t *v)
> +{
> +        rte_atomic16_sub(v, 1);
> +}
> +
> +static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
> +{
> +        return __sync_add_and_fetch(&v->cnt, 1) == 0;
> +}
> +
> +static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
> +{
> +        return __sync_sub_and_fetch(&v->cnt, 1) == 0;
> +}
> +
> +static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
> +{
> +        return rte_atomic16_cmpset((volatile uint16_t *)&v->cnt, 0, 1);
> +}
> +
> +/*------------------------- 32 bit atomic operations -------------------------*/
> +static inline int
> +rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
> +{
> +        return __sync_bool_compare_and_swap(dst, exp, src);
> +}
> +
> +static inline uint32_t
> +rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
> +{
> +#if defined(__clang__)
> +        return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
> +#else
> +        return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +static inline void
> +rte_atomic32_inc(rte_atomic32_t *v)
> +{
> +        rte_atomic32_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic32_dec(rte_atomic32_t *v)
> +{
> +        rte_atomic32_sub(v, 1);
> +}
> +
> +static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
> +{
> +        return __sync_add_and_fetch(&v->cnt, 1) == 0;
> +}
> +
> +static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
> +{
> +        return __sync_sub_and_fetch(&v->cnt, 1) == 0;
> +}
> +
> +static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
> +{
> +        return rte_atomic32_cmpset((volatile uint32_t *)&v->cnt, 0, 1);
> +}
> +
> +/*------------------------- 64 bit atomic operations -------------------------*/
> +static inline int
> +rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
> +{
> +        return __sync_bool_compare_and_swap(dst, exp, src);
> +}
> +
> +static inline uint64_t
> +rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
> +{
> +#if defined(__clang__)
> +        return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
> +#else
> +        return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +static inline void
> +rte_atomic64_init(rte_atomic64_t *v)
> +{
> +        v->cnt = 0;
> +}
> +
> +static inline int64_t
> +rte_atomic64_read(rte_atomic64_t *v)
> +{
> +        return v->cnt;
> +}
> +
> +static inline void
> +rte_atomic64_set(rte_atomic64_t *v, int64_t new_value)
> +{
> +        v->cnt = new_value;
> +}
> +
> +static inline void
> +rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
> +{
> +        __sync_fetch_and_add(&v->cnt, inc);
> +}
> +
> +static inline void
> +rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
> +{
> +        __sync_fetch_and_sub(&v->cnt, dec);
> +}
> +
> +static inline void
> +rte_atomic64_inc(rte_atomic64_t *v)
> +{
> +        rte_atomic64_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic64_dec(rte_atomic64_t *v)
> +{
> +        rte_atomic64_sub(v, 1);
> +}
> +
> +static inline int64_t
> +rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
> +{
> +        return __sync_add_and_fetch(&v->cnt, inc);
> +}
> +
> +static inline int64_t
> +rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
> +{
> +        return __sync_sub_and_fetch(&v->cnt, dec);
> +}
> +
> +static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
> +{
> +        return rte_atomic64_add_return(v, 1) == 0;
> +}
> +
> +static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
> +{
> +        return rte_atomic64_sub_return(v, 1) == 0;
> +}
> +
> +static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
> +{
> +        return rte_atomic64_cmpset((volatile uint64_t *)&v->cnt, 0, 1);
> +}
> +
> +static inline void rte_atomic64_clear(rte_atomic64_t *v)
> +{
> +        rte_atomic64_set(v, 0);
> +}
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_ATOMIC_LOONGARCH_H_ */

[snip]

> diff --git a/lib/eal/loongarch/include/rte_cycles.h b/lib/eal/loongarch/include/rte_cycles.h
> new file mode 100644
> index 0000000000..1f8f957faf
> --- /dev/null
> +++ b/lib/eal/loongarch/include/rte_cycles.h
> @@ -0,0 +1,53 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef _RTE_CYCLES_LOONGARCH_H_
> +#define _RTE_CYCLES_LOONGARCH_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_cycles.h"
> +
> +static inline uint64_t
> +get_cycle_count(void)

Same comment as earlier for the memory barriers: this name is too generic.
Prefer rte_rdtsc() as the name for this helper.
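Roughly like this (untested sketch): fold the asm into rte_rdtsc() itself,
so the extra wrapper disappears:

static inline uint64_t
rte_rdtsc(void)
{
        uint64_t count;

        /* Same rdtime.d-based counter read as in the patch. */
        __asm__ __volatile__(
                "rdtime.d %[cycles], $zero\n"
                : [cycles] "=r" (count)
                ::);
        return count;
}
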
> +{
> +        uint64_t count;
> +
> +        __asm__ __volatile__ (
> +                "rdtime.d %[cycles], $zero\n"
> +                : [cycles] "=r" (count)
> +                ::
> +        );
> +        return count;
> +}
> +
> +/**
> + * Read the time base register.
> + *
> + * @return
> + *   The time base for this lcore.
> + */
> +static inline uint64_t
> +rte_rdtsc(void)
> +{
> +        return get_cycle_count();
> +}
> +
> +static inline uint64_t
> +rte_rdtsc_precise(void)
> +{
> +        rte_mb();
> +        return rte_rdtsc();
> +}
> +
> +static inline uint64_t
> +rte_get_tsc_cycles(void) { return rte_rdtsc(); }
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_CYCLES_LOONGARCH_H_ */

[snip]

> diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
> new file mode 100644
> index 0000000000..4abcd0fdb3
> --- /dev/null
> +++ b/lib/eal/loongarch/rte_cpuflags.c
> @@ -0,0 +1,94 @@
> +/*
> + * SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Loongson Technology Corporation Limited
> + */
> +
> +#include "rte_cpuflags.h"
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +/* Symbolic values for the entries in the auxiliary table */
> +#define AT_HWCAP 16
> +#define AT_HWCAP2 26

AT_HWCAP2 is not used.
> +
> +/* software based registers */
> +enum cpu_register_t {
> +        REG_NONE = 0,
> +        REG_HWCAP,
> +        REG_MAX
> +};
> +
> +typedef uint32_t hwcap_registers_t[REG_MAX];
> +
> +struct feature_entry {
> +        uint32_t reg;
> +        uint32_t bit;
> +#define CPU_FLAG_NAME_MAX_LEN 64
> +        char name[CPU_FLAG_NAME_MAX_LEN];
> +};
> +
> +#define FEAT_DEF(name, reg, bit) \
> +        [RTE_CPUFLAG_##name] = {reg, bit, #name},
> +
> +const struct feature_entry rte_cpu_feature_table[] = {
> +        FEAT_DEF(CPUCFG, REG_HWCAP, 0)
> +        FEAT_DEF(LAM, REG_HWCAP, 1)
> +        FEAT_DEF(UAL, REG_HWCAP, 2)
> +        FEAT_DEF(FPU, REG_HWCAP, 3)
> +        FEAT_DEF(LSX, REG_HWCAP, 4)
> +        FEAT_DEF(LASX, REG_HWCAP, 5)
> +        FEAT_DEF(CRC32, REG_HWCAP, 6)
> +        FEAT_DEF(COMPLEX, REG_HWCAP, 7)
> +        FEAT_DEF(CRYPTO, REG_HWCAP, 8)
> +        FEAT_DEF(LVZ, REG_HWCAP, 9)
> +        FEAT_DEF(LBT_X86, REG_HWCAP, 10)
> +        FEAT_DEF(LBT_ARM, REG_HWCAP, 11)
> +        FEAT_DEF(LBT_MIPS, REG_HWCAP, 12)
> +};
> +
> +/*
> + * Read AUXV software register and get cpu features for LoongArch
> + */
> +static void
> +rte_cpu_get_features(hwcap_registers_t out)
> +{
> +        out[REG_HWCAP] = rte_cpu_getauxval(AT_HWCAP);
> +}
> +
> +/*
> + * Checks if a particular flag is available on current machine.
> + */
> +int
> +rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> +{
> +        const struct feature_entry *feat;
> +        hwcap_registers_t regs = {0};
> +
> +        if (feature >= RTE_CPUFLAG_NUMFLAGS)
> +                return -ENOENT;
> +
> +        feat = &rte_cpu_feature_table[feature];
> +        if (feat->reg == REG_NONE)
> +                return -EFAULT;
> +
> +        rte_cpu_get_features(regs);
> +        return (regs[feat->reg] >> feat->bit) & 1;
> +}
> +
> +const char *
> +rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> +{
> +        if (feature >= RTE_CPUFLAG_NUMFLAGS)
> +                return NULL;
> +        return rte_cpu_feature_table[feature].name;
> +}
> +
> +void
> +rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
> +{
> +        memset(intrinsics, 0, sizeof(*intrinsics));
> +}

[snip]


-- 
David Marchand