> why is it done in a scalar way instead of using `__riscv_vsrl_vx_u32m1()`? I assume you're relying on the compiler here?
I don't know the exact reason, but in my experience indexed loads tend to be slower for small, low-computation cases, so I tried both approaches.
Using `vsrl` here would also require an indexed gather with `__riscv_vluxei32_v_u32m1`, which turned out to be much slower:
```
/* vsrl by 8 gives the tbl24 index; vsll by 2 turns it into the byte offset that vluxei32 expects */
vuint32m1_t vip_shifted = __riscv_vsll_vx_u32m1(__riscv_vsrl_vx_u32m1(
	__riscv_vle32_v_u32m1((const uint32_t *)&ip, vl), 8, vl), 2, vl);
vuint32m1_t vtbl_entry = __riscv_vluxei32_v_u32m1(
	(const uint32_t *)(lpm->tbl24), vip_shifted, vl);
```
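For comparison, here is the corresponding part of the patch (trimmed to the tbl24 gather): the indices are computed with scalar shifts into a small on-stack array and then loaded with a single unit-stride `vle32`, which avoids the indexed gather entirely.
```
const uint32_t *tbl24_p = (const uint32_t *)lpm->tbl24;
uint32_t tbl_entries[4] = {
	tbl24_p[((uint32_t)ip[0]) >> 8],
	tbl24_p[((uint32_t)ip[1]) >> 8],
	tbl24_p[((uint32_t)ip[2]) >> 8],
	tbl24_p[((uint32_t)ip[3]) >> 8],
};
/* unit-stride load instead of the indexed vluxei32 gather */
vuint32m1_t vtbl_entry = __riscv_vle32_v_u32m1(tbl_entries, vl);
```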
> have you redefined the xmm_t type for proper index addressing?
It is in `eal/riscv/include/rte_vect.h`:
```
typedef int32_t xmm_t __attribute__((vector_size(16)));
```
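Since `xmm_t` is defined with the GCC/Clang vector extensions, per-lane subscripting such as `ip[0]` is supported directly, and that is what the scalar index computation relies on. A minimal sketch (the helper name below is only for illustration and is not part of the patch):
```
#include <stdint.h>

typedef int32_t xmm_t __attribute__((vector_size(16)));

/* Hypothetical helper: tbl24 index for lane 0, as computed in the patch. */
static inline uint32_t tbl24_index_lane0(xmm_t ip)
{
	return ((uint32_t)ip[0]) >> 8;
}
```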
> I'd recommend that you use FIB to select an implementation at runtime. All the rest LPM vector x4 implementations are done this way, and their code is inlined.
> Also, please consider writing a slightly more informative and explanatory commit message.
I agree that the FIB approach is clearly better here, but adopting it would require changing the lookup-function initialization logic for every architecture in LPM, as well as updating the related structures.
I'm not sure that's worth doing right now, since this commit is intended to be a small RISC-V-only change; I'd rather follow the existing structure and avoid touching other architectures' code.
Would it be acceptable to leave this kind of refactoring for the future?
If you're certain it should be done now, I'll make the changes. For now, I've only updated the commit message to include this idea (v2).
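To make the scope concrete, here is a rough, hypothetical sketch of the FIB-style direction; `lpm_lookupx4_fn`, `struct rte_lpm_ext`, and `lpm_select_lookupx4` do not exist in today's LPM code, and adding something like them is exactly the structural change I'd like to defer:
```
#include <rte_cpuflags.h>
#include <rte_lpm.h>

/* Hypothetical sketch only -- none of these names exist in current LPM. */
typedef void (*lpm_lookupx4_fn)(const struct rte_lpm *lpm, xmm_t ip,
		uint32_t hop[4], uint32_t defv);

/* The LPM object would have to carry the selected implementation ... */
struct rte_lpm_ext {
	struct rte_lpm lpm;
	lpm_lookupx4_fn lookupx4;	/* picked at create/setup time */
};

/* ... and each architecture's init path would register its candidates. */
static void lpm_select_lookupx4(struct rte_lpm_ext *l)
{
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_RISCV_ISA_V))
		l->lookupx4 = rte_lpm_lookupx4_rvv;
	else
		l->lookupx4 = rte_lpm_lookupx4_scalar;
}
```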
-----Original Message-----
From: "Medvedkin, Vladimir" <vladimir.medvedkin@intel.com>
Sent: 2025-05-30 21:13:57 (Friday)
To: uk7b@foxmail.com, dev@dpdk.org
Cc: sunyuechi <sunyuechi@iscas.ac.cn>, "Thomas Monjalon" <thomas@monjalon.net>, "Bruce Richardson" <bruce.richardson@intel.com>, "Stanislaw Kardach" <stanislaw.kardach@gmail.com>
Subject: Re: [PATCH 2/3] lib/lpm: R-V V rte_lpm_lookupx4
Hi Sunyuechi,
On 28/05/2025 18:00, uk7b@foxmail.com wrote:
From: sunyuechi <sunyuechi@iscas.ac.cn>

bpi-f3:
scalar: 5.7 cycles
rvv:    2.4 cycles

Maybe runtime detection in LPM should be added for all architectures, but this commit is only about the RVV part.

I would advise you to look into the FIB library, it has exactly what you are looking for.
Also, please consider writing a slightly more informative and explanatory commit message.
<snip>

Signed-off-by: sunyuechi <sunyuechi@iscas.ac.cn>
---
```
 MAINTAINERS           |  2 +
 lib/lpm/meson.build   |  1 +
 lib/lpm/rte_lpm.h     |  2 +
 lib/lpm/rte_lpm_rvv.h | 91 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 96 insertions(+)
 create mode 100644 lib/lpm/rte_lpm_rvv.h

+static inline void rte_lpm_lookupx4_rvv(
+	const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
+{
+	size_t vl = 4;
+
+	const uint32_t *tbl24_p = (const uint32_t *)lpm->tbl24;
+	uint32_t tbl_entries[4] = {
+		tbl24_p[((uint32_t)ip[0]) >> 8],
+		tbl24_p[((uint32_t)ip[1]) >> 8],
+		tbl24_p[((uint32_t)ip[2]) >> 8],
+		tbl24_p[((uint32_t)ip[3]) >> 8],
+	};
```

I'm not an expert in RISC-V, but why is it done in a scalar way instead of using __riscv_vsrl_vx_u32m1()? I assume you're relying on the compiler here?
Also, have you redefined the xmm_t type for proper index addressing?
<snip>

```
+	vuint32m1_t vtbl_entry = __riscv_vle32_v_u32m1(tbl_entries, vl);
+
+	vbool32_t mask = __riscv_vmseq_vx_u32m1_b32(
+		__riscv_vand_vx_u32m1(vtbl_entry, RTE_LPM_VALID_EXT_ENTRY_BITMASK, vl),
+		RTE_LPM_VALID_EXT_ENTRY_BITMASK, vl);
```

As I mentioned earlier, I'd recommend that you use FIB to select an implementation at runtime. All the rest LPM vector x4 implementations are done this way, and their code is inlined.

```
+
+static inline void rte_lpm_lookupx4(
+	const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
+{
+	lpm_lookupx4_impl(lpm, ip, hop, defv);
+}
+
+RTE_INIT(rte_lpm_init_alg)
+{
+	lpm_lookupx4_impl = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RISCV_ISA_V)
+		? rte_lpm_lookupx4_rvv
+		: rte_lpm_lookupx4_scalar;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_RVV_H_ */
```

--
Regards,
Vladimir