Hi Sunyuechi,

On 04/06/2025 12:39, 孙越池 wrote:
> > why is it done in a scalar way instead of using
> > `__riscv_vsrl_vx_u32m1()`? I assume you're relying on the compiler here?
>
> I don't know the exact reason, but based on experience, using indexed
> loads tends to be slower for small-scale and low-computation cases, so
> I've tried both methods. In this case, using `vsrl` would require
> `__riscv_vluxei32_v_u32m1`, which is much slower.
>
> ```
> vuint32m1_t vip_shifted = __riscv_vsll_vx_u32m1(
>     __riscv_vsrl_vx_u32m1(
>         __riscv_vle32_v_u32m1((const uint32_t *)&ip, vl), 8, vl),
>     2, vl);
> vuint32m1_t vtbl_entry = __riscv_vluxei32_v_u32m1(
>     (const uint32_t *)(lpm->tbl24), vip_shifted, vl);
> ```
>
> > have you redefined the xmm_t type for proper index addressing?
>
> It is in `eal/riscv/include/rte_vect.h`:
>
> ```
> typedef int32_t xmm_t __attribute__((vector_size(16)));
> ```
>
> > I'd recommend that you use FIB to select an implementation at
> > runtime. All the rest LPM vector x4 implementations are done this
> > way, and their code is inlined.
> >
> > Also, please consider writing a slightly more informative and
> > explanatory commit message.

The commit message still looks uninformative to me:

> lpm_perf_autotest on BPI-F3

We have no idea what that is.

> scalar: 5.7 cycles

I'm not sure we want to have this information in the commit message
either, because it is useless. Cycle counts depend on too many variable
parts - the CPU frequency, the memory speed, the cache sizes, and so
on. This information is irrelevant and becomes obsolete pretty fast.

From the latest commit:

> The best way ... However, ... Therefore, ... this commit does not modify
> Unifying the code style between lpm and fib may be worth considering in the future.

I don't think it is a good idea to put information about what was NOT
done into the commit message.
You should put all this information (the platform you were running on,
the performance numbers, implementation considerations and thoughts)
into the patch notes.

> I agree that the FIB approach is clearly better here, but adopting
> this method would require changing the function initialization logic
> for all architectures in LPM, as well as updating the relevant
> structures.
>
> I'm not sure it's worth doing right now, since this commit is intended
> to be just a small change for RISC-V. I'm more inclined to follow the
> existing structure and avoid touching other architectures' code.
> Would it be acceptable to leave this kind of refactoring for the
> future?
>
> If you're certain it should be done now, I'll make the changes. For
> now, I've only updated the commit message to include this idea (v2).

I'm not talking about adopting the FIB approach in the LPM. Instead, I
suggested keeping the LPM code consistent and leaving your
implementation as a static inline function. And if you want to have a
runtime CPU flags check - you're welcome to do so in the FIB.

> -----Original Message-----
> From: "Medvedkin, Vladimir"
> Sent: 2025-05-30 21:13:57 (Friday)
> To: uk7b@foxmail.com, dev@dpdk.org
> Cc: sunyuechi, "Thomas Monjalon", "Bruce Richardson",
> "Stanislaw Kardach"
> Subject: Re: [PATCH 2/3] lib/lpm: R-V V rte_lpm_lookupx4
>
> Hi c,
>
> On 28/05/2025 18:00, uk7b@foxmail.com wrote:
>> From: sunyuechi
>>
>> bpi-f3:
>>   scalar: 5.7 cycles
>>   rvv: 2.4 cycles
>>
>> Maybe runtime detection in LPM should be added for all architectures,
>> but this commit is only about the RVV part.
>
> I would advise you to look into the FIB library, it has exactly what
> you are looking for.
>
> Also, please consider writing a slightly more informative and
> explanatory commit message.
>> Signed-off-by: sunyuechi
>> ---
>>  MAINTAINERS           |  2 +
>>  lib/lpm/meson.build   |  1 +
>>  lib/lpm/rte_lpm.h     |  2 +
>>  lib/lpm/rte_lpm_rvv.h | 91 +++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 96 insertions(+)
>>  create mode 100644 lib/lpm/rte_lpm_rvv.h
>>
>> +static inline void rte_lpm_lookupx4_rvv(
>> +	const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
>> +{
>> +	size_t vl = 4;
>> +
>> +	const uint32_t *tbl24_p = (const uint32_t *)lpm->tbl24;
>> +	uint32_t tbl_entries[4] = {
>> +		tbl24_p[((uint32_t)ip[0]) >> 8],
>> +		tbl24_p[((uint32_t)ip[1]) >> 8],
>> +		tbl24_p[((uint32_t)ip[2]) >> 8],
>> +		tbl24_p[((uint32_t)ip[3]) >> 8],
>> +	};
>
> I'm not an expert in RISC-V, but why is it done in a scalar way
> instead of using __riscv_vsrl_vx_u32m1()? I assume you're relying on
> the compiler here?
>
> Also, have you redefined the xmm_t type for proper index addressing?
>
>> +	vuint32m1_t vtbl_entry = __riscv_vle32_v_u32m1(tbl_entries, vl);
>> +
>> +	vbool32_t mask = __riscv_vmseq_vx_u32m1_b32(
>> +		__riscv_vand_vx_u32m1(vtbl_entry, RTE_LPM_VALID_EXT_ENTRY_BITMASK, vl),
>> +		RTE_LPM_VALID_EXT_ENTRY_BITMASK, vl);
>
>> +
>> +static inline void rte_lpm_lookupx4(
>> +	const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
>> +{
>> +	lpm_lookupx4_impl(lpm, ip, hop, defv);
>> +}
>> +
>> +RTE_INIT(rte_lpm_init_alg)
>> +{
>> +	lpm_lookupx4_impl = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RISCV_ISA_V)
>> +		? rte_lpm_lookupx4_rvv
>> +		: rte_lpm_lookupx4_scalar;
>> +}
>
> As I mentioned earlier, I'd recommend that you use FIB to select an
> implementation at runtime. All the rest LPM vector x4 implementations
> are done this way, and their code is inlined.
>
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif /* _RTE_LPM_RVV_H_ */
>
> --
> Regards,
> Vladimir

--
Regards,
Vladimir