[PATCH v2 0/5] bpf: simple tests and fixes
From: Marat Khalili @ 2025-12-16 18:20 UTC
To: stephen, jerinjacobk, mb; +Cc: dev, stable, konstantin.ananyev

Add simple unit-style tests for rte_bpf_load, and fix some minor
discovered bugs.

v2:

* Moved new tests back into test_bpf.c
* Changed library macros RTE_SHIFT_VALxx to handle variable first
  arguments instead of introducing new macros.
* Added another test and fix, for a conditional jump as the first
  instruction.

Marat Khalili (5):
  eal: variable first arguments of RTE_SHIFT_VALxx
  bpf: fix signed shift overflows in ARM JIT
  bpf: disallow empty program
  bpf: make add/subtract one program validate
  bpf: fix BPF validation w/ conditional jump first

 app/test/test_bpf.c          | 254 +++++++++++++++++++++++++++++++++++
 lib/bpf/bpf_jit_arm64.c      | 162 +++++++++++-----------
 lib/bpf/bpf_load.c           |   2 +-
 lib/bpf/bpf_validate.c       |  48 ++++---
 lib/eal/include/rte_bitops.h |   4 +-
 5 files changed, 364 insertions(+), 106 deletions(-)

-- 
2.43.0
[PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx
From: Marat Khalili @ 2025-12-16 18:20 UTC
To: stephen, jerinjacobk, mb, Jack Bond-Preston
Cc: dev, stable, konstantin.ananyev

The macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by
applying UINT32_C and UINT64_C respectively to their first argument. As
a consequence, the first argument had to be a constant. Replace UINT32_C
and UINT64_C with casts to uint32_t and uint64_t to allow these
arguments to be variables. For constants the result should be the same.

(Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but
I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.)

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/eal/include/rte_bitops.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 2d1b9d281c..aa6ac73abb 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -51,7 +51,7 @@ extern "C" {
  * @param nr
  *   The shift number in range of 0 to (32 - width of val).
  */
-#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr))
+#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr))
 
 /**
  * Get the uint64_t shifted value.
@@ -61,7 +61,7 @@ extern "C" {
  * @param nr
  *   The shift number in range of 0 to (64 - width of val).
  */
-#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr))
+#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr))
 
 /**
  * Generate a contiguous 32-bit mask
-- 
2.43.0
RE: [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx
From: Morten Brørup @ 2025-12-17  9:25 UTC
To: Marat Khalili, stephen, jerinjacobk, Jack Bond-Preston
Cc: dev, stable, konstantin.ananyev

> From: Marat Khalili [mailto:marat.khalili@huawei.com]
> Sent: Tuesday, 16 December 2025 19.21
>
> The macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by
> applying UINT32_C and UINT64_C respectively to their first argument.
> As a consequence, the first argument had to be a constant. Replace
> UINT32_C and UINT64_C with casts to uint32_t and uint64_t to allow
> these arguments to be variables. For constants the result should be
> the same.
>
> (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t,
> but I think most users of RTE_SHIFT_VAL64 expect the result to be
> uint64_t.)

Interesting detail. And yes, I agree about expectations.

> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> ---
>  lib/eal/include/rte_bitops.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
> index 2d1b9d281c..aa6ac73abb 100644
> --- a/lib/eal/include/rte_bitops.h
> +++ b/lib/eal/include/rte_bitops.h
> @@ -51,7 +51,7 @@ extern "C" {
>   * @param nr
>   *   The shift number in range of 0 to (32 - width of val).
>   */
> -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr))
> +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr))
>
>  /**
>   * Get the uint64_t shifted value.
> @@ -61,7 +61,7 @@ extern "C" {
>   * @param nr
>   *   The shift number in range of 0 to (64 - width of val).
>   */
> -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr))
> +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr))
>
>  /**
>   * Generate a contiguous 32-bit mask
> --
> 2.43.0

Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
[PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT
From: Marat Khalili @ 2025-12-16 18:20 UTC
To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev
Cc: dev, stable

Left shifts of integer literals and bool values into the sign bit were
used multiple times in bpf_jit_arm64.c. E.g.:

	insn = (!!is64) << 31;

where is64 has type bool (the double bang is a no-op here). The operand
of the left shift is promoted to type int, which when 32 bits wide
cannot represent the result. Similarly, integer literals have type int
by default. The sanitizer produced the following diagnostic at runtime
(for various lines):

	lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31
	places cannot be represented in type 'int'

To fix the issue use RTE_BIT32 and similar macros instead.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
---
 lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++--------------------
 1 file changed, 81 insertions(+), 81 deletions(-)

diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
index 96b8cd2e03..76df1e4ba1 100644
--- a/lib/bpf/bpf_jit_arm64.c
+++ b/lib/bpf/bpf_jit_arm64.c
@@ -28,7 +28,7 @@
 #define A64_ZR 31
 
 #define check_imm(n, val) (((val) >= 0) ?
!!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = RTE_SHIFT_VAL32(load, 22); + insn |= RTE_SHIFT_VAL32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= RTE_SHIFT_VAL32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(type, 29); + insn |= RTE_SHIFT_VAL32(0x25, 23); + insn |= RTE_SHIFT_VAL32(shift/16, 21); + insn |= RTE_SHIFT_VAL32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* 
Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = RTE_SHIFT_VAL32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= RTE_SHIFT_VAL32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= RTE_SHIFT_VAL32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= RTE_SHIFT_VAL32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= RTE_SHIFT_VAL32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, 
uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd6, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(op, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = RTE_SHIFT_VAL32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x26, 23); + insn |= RTE_SHIFT_VAL32(immr, 16); + insn |= RTE_SHIFT_VAL32(imms, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x50, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1, 15); + insn |= RTE_SHIFT_VAL32(ra, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || 
check_reg(rm) || @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= RTE_SHIFT_VAL32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= RTE_SHIFT_VAL32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= RTE_SHIFT_VAL32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0x35, 24); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = RTE_SHIFT_VAL32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn |= (!!is64) << 30; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rn, 
5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(0x15, 26); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
RE: [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT
From: Morten Brørup @ 2025-12-17  9:49 UTC
To: Marat Khalili, stephen, jerinjacobk, Wathsala Vithanage, Konstantin Ananyev
Cc: dev, stable

> From: Marat Khalili [mailto:marat.khalili@huawei.com]
> Sent: Tuesday, 16 December 2025 19.21
>
> Left shifts of integer literals and bool values into the sign bit were
> used multiple times in bpf_jit_arm64.c. E.g.:
>
> 	insn = (!!is64) << 31;
>
> where is64 has type bool (the double bang is a no-op here).

Yes, multiple online sources [1] confirm that the bool type can only
hold two values, 0 and 1, and converting any non-zero value to bool
results in 1. So, using !!x where x is a bool is literally a no-op,
since x - being a bool - could already only hold the values 0 and 1.

[1]: E.g.
https://www.gnu.org/software/c-intro-and-ref/manual/html_node/Boolean-Type.html
https://embedded.fm/blog/2016/5/3/ew-bools

> The operand of the left shift is promoted to type int, which when
> 32 bits wide cannot represent the result. Similarly, integer literals
> have type int by default. The sanitizer produced the following
> diagnostic at runtime (for various lines):
>
> 	lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31
> 	places cannot be represented in type 'int'
>
> To fix the issue use RTE_BIT32 and similar macros instead.
>
> Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
> ---
>  lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++--------------------
>  1 file changed, 81 insertions(+), 81 deletions(-)
>
> diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c
> index 96b8cd2e03..76df1e4ba1 100644
> --- a/lib/bpf/bpf_jit_arm64.c
> +++ b/lib/bpf/bpf_jit_arm64.c
> @@ -28,7 +28,7 @@
>  #define A64_ZR 31
>
>  #define check_imm(n, val) (((val) >= 0) ?
!!((val) >> (n)) : !!((~val) > >> (n))) > -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) > +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) > > struct ebpf_a64_map { > uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ > @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool > is64, bool sub, uint8_t rd, > uint32_t insn, imm; > > imm = mask_imm(12, imm12); > - insn = (!!is64) << 31; > - insn |= (!!sub) << 30; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(sub, 30); As mentioned above, !!is64 and !!sub are literally no-ops, so it is safe to remove them. > insn |= 0x11000000; > insn |= rd; > - insn |= rn << 5; > - insn |= imm << 10; > + insn |= RTE_SHIFT_VAL32(rn, 5); > + insn |= RTE_SHIFT_VAL32(imm, 10); > > emit_insn(ctx, insn, > check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); > @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t > rt, uint8_t rt2, uint8_t rn, > { > uint32_t insn; > > - insn = (!!load) << 22; > - insn |= (!!pre_index) << 24; > + insn = RTE_SHIFT_VAL32(load, 22); > + insn |= RTE_SHIFT_VAL32(pre_index, 24); > insn |= 0xa8800000; This constant has the high bit set, consider: insn |= UINT32_C(0xa8800000); And for consistency, consider also doing it for constants where the high bit isn't set. Throughout the file. 
> insn |= rt; > - insn |= rn << 5; > - insn |= rt2 << 10; > + insn |= RTE_SHIFT_VAL32(rn, 5); > + insn |= RTE_SHIFT_VAL32(rt2, 10); > if (push) > - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ > + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with > imm7 */ > else > - insn |= 0x2 << 15; > + insn |= RTE_SHIFT_VAL32(0x2, 15); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || > check_reg(rt2)); > > @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t type, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= type << 29; > - insn |= 0x25 << 23; > - insn |= (shift/16) << 21; > - insn |= imm16 << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(type, 29); > + insn |= RTE_SHIFT_VAL32(0x25, 23); > + insn |= RTE_SHIFT_VAL32(shift/16, 21); > + insn |= RTE_SHIFT_VAL32(imm16, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); > @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint32_t val) > uint16_t lower = val & 0xffff; > > /* Positive number */ > - if ((val & 1UL << 31) == 0) { > + if ((val & RTE_BIT32(31)) == 0) { > mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); > if (upper) > mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); > @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, > uint8_t rt, uint8_t rn, uint8_t rm, > { > uint32_t insn; > > - insn = 0x1c1 << 21; > + insn = RTE_SHIFT_VAL32(0x1c1, 21); > if (load) > - insn |= 1 << 22; > + insn |= RTE_BIT32(22); > if (sz == BPF_B) > - insn |= 0 << 30; > + insn |= RTE_SHIFT_VAL32(0, 30); > else if (sz == BPF_H) > - insn |= 1 << 30; > + insn |= RTE_SHIFT_VAL32(1, 30); > else if (sz == BPF_W) > - insn |= 2 << 30; > + insn |= RTE_SHIFT_VAL32(2, 30); > else if (sz == EBPF_DW) > - insn |= 3 << 30; > + insn |= RTE_SHIFT_VAL32(3, 30); > > - insn |= rm << 16; > - insn |= 0x1a << 10; /* LSL and S = 0 */ > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= 
RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || > check_reg(rm) || > @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 21; /* shift == 0 */ > - insn |= rm << 16; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rm) > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= A64_ZR << 10; > - insn |= rd << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd8, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx > *ctx, bool is64, uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd6 << 21; > - insn |= rm << 16; > - insn |= op << 10; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd6, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(op, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > + insn = RTE_SHIFT_VAL32(is64, 31); > if (insn) > - insn |= 1 << 22; /* Set N bit when is64 is set */ > - insn |= op << 29; > - insn |= 0x26 << 23; > - insn |= immr << 16; > - insn |= imms << 10; > - insn 
|= rn << 5; > + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ > + insn |= RTE_SHIFT_VAL32(op, 29); > + insn |= RTE_SHIFT_VAL32(0x26, 23); > + insn |= RTE_SHIFT_VAL32(immr, 16); > + insn |= RTE_SHIFT_VAL32(imms, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || > @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= op << 29; > - insn |= 0x50 << 21; > - insn |= rm << 16; > - insn |= rd << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(op, 29); > + insn |= RTE_SHIFT_VAL32(0x50, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); > @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, > uint8_t rd, uint8_t rn, > { > uint32_t insn; > > - insn = (!!is64) << 31; > - insn |= 0xd8 << 21; > - insn |= rm << 16; > - insn |= 0x1 << 15; > - insn |= ra << 10; > - insn |= rn << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0xd8, 21); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(0x1, 15); > + insn |= RTE_SHIFT_VAL32(ra, 10); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rd; > > emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || > check_reg(rm) || > @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) > uint32_t insn; > > insn = 0xd63f0000; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn)); > } > @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, > int32_t imm) > uint32_t insn; > > insn = 0xdac00000; > - insn |= rd << 5; > + insn |= RTE_SHIFT_VAL32(rd, 5); > insn |= rd; > > switch (imm) { > case 16: > - insn |= 1 << 10; > + insn |= RTE_SHIFT_VAL32(1, 10); > emit_insn(ctx, insn, check_reg(rd)); > emit_zero_extend(ctx, rd, 16); > break; > case 32: > - insn |= 2 << 
10; > + insn |= RTE_SHIFT_VAL32(2, 10); > emit_insn(ctx, insn, check_reg(rd)); > /* Upper 32 bits already cleared */ > break; > case 64: > - insn |= 3 << 10; > + insn |= RTE_SHIFT_VAL32(3, 10); > emit_insn(ctx, insn, check_reg(rd)); > break; > default: > @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, > uint8_t rt, int32_t imm19) > uint32_t insn, imm; > > imm = mask_imm(19, imm19); > - insn = (!!is64) << 31; > - insn |= 0x35 << 24; > - insn |= imm << 5; > + insn = RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(0x35, 24); > + insn |= RTE_SHIFT_VAL32(imm, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); > @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) > uint32_t insn, imm; > > imm = mask_imm(26, imm26); > - insn = 0x5 << 26; > + insn = RTE_SHIFT_VAL32(0x5, 26); > insn |= imm; > > emit_insn(ctx, insn, check_imm(26, imm26)); > @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, > uint8_t rs, uint8_t rn) > uint32_t insn; > > insn = 0xb820001f; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rs, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); > } > @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, > uint8_t rt, uint8_t rn) > uint32_t insn; > > insn = 0x885f7c00; > - insn |= (!!is64) << 30; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); > @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, > uint8_t rs, uint8_t rt, > uint32_t insn; > > insn = 0x88007c00; > - insn |= (!!is64) << 30; > - insn |= rs << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 30); > + insn |= RTE_SHIFT_VAL32(rs, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > insn |= rt; > > emit_insn(ctx, insn, check_reg(rs) || 
check_reg(rt) || > check_reg(rn)); > @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, > uint8_t rn, uint8_t rm, > uint32_t insn; > > insn = opc; > - insn |= (!!is64) << 31; > - insn |= rm << 16; > - insn |= rn << 5; > + insn |= RTE_SHIFT_VAL32(is64, 31); > + insn |= RTE_SHIFT_VAL32(rm, 16); > + insn |= RTE_SHIFT_VAL32(rn, 5); > > emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); > } > @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t > cond, int32_t imm19) > uint32_t insn, imm; > > imm = mask_imm(19, imm19); > - insn = 0x15 << 26; > - insn |= imm << 5; > + insn = RTE_SHIFT_VAL32(0x15, 26); > + insn |= RTE_SHIFT_VAL32(imm, 5); > insn |= cond; > > emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); > @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf > *bpf) > break; > /* dst = imm64 */ > case (BPF_LD | BPF_IMM | EBPF_DW): > - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; > + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | > (uint32_t)imm; > emit_mov_imm(ctx, 1, dst, u64); > i++; > break; > -- > 2.43.0 With or without suggested changes, Acked-by: Morten Brørup <mb@smartsharesystems.com> ^ permalink raw reply [flat|nested] 17+ messages in thread
[PATCH v2 3/5] bpf: disallow empty program
From: Marat Khalili @ 2025-12-16 18:20 UTC
To: stephen, jerinjacobk, mb, Konstantin Ananyev
Cc: dev, stable

Add tests for some simple cases:

* Program with no instructions;
* Program with only an EXIT instruction but no return value set;
* Program with a return value set but no EXIT instruction;
* Minimal valid program with a return value set and an EXIT instruction.

Fix the bugs found:

* a program with no instructions was accepted;
* a program with no EXIT instruction read outside the buffer.

Signed-off-by: Marat Khalili <marat.khalili@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 app/test/test_bpf.c    | 118 +++++++++++++++++++++++++++++++++++++++++
 lib/bpf/bpf_load.c     |   2 +-
 lib/bpf/bpf_validate.c |  20 +++++--
 3 files changed, 135 insertions(+), 5 deletions(-)

diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index b7c94ba1c7..6ecc49efff 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -34,6 +34,124 @@ test_bpf(void)
 
 #include <rte_ip.h>
 
+/* Tests of most simple BPF programs (no instructions, one instruction etc.) */
+
+/*
+ * Try to load a simple bpf program from the instructions array.
+ *
+ * When `expected_errno` is zero, expect it to load successfully.
+ * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`.
+ *
+ * @param nb_ins
+ *   Number of instructions in the `ins` array.
+ * @param ins
+ *   BPF instructions array.
+ * @param expected_errno + * Expected result. + * @return + * TEST_SUCCESS on success, error code on failure. + */ +static int +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, test_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); + +/* + * Try and load smallest possible valid BPF program. + */ +static int +test_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. 
*/ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. + */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v2 3/5] bpf: disallow empty program 2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili @ 2025-12-18 0:54 ` Stephen Hemminger 2025-12-17 8:58 ` Marat Khalili 0 siblings, 1 reply; 17+ messages in thread From: Stephen Hemminger @ 2025-12-18 0:54 UTC (permalink / raw) To: Marat Khalili Cc: Jerin Jacob, Morten Brørup, Konstantin Ananyev, dev, stable [-- Attachment #1: Type: text/plain, Size: 7595 bytes --] I see no need for docbook style comments for static functions. Especially in test only code On Tue, Dec 16, 2025, 10:21 Marat Khalili <marat.khalili@huawei.com> wrote: > Add tests for some simple cases: > * Program with no instructions; > * Program with only EXIT instruction but no return value set; > * Program with return value set but no EXIT instruction; > * Minimal valid program with return value set and an EXIT instruction. > > Fix found bugs: > * a program with no instructions was accepted; > * a program with no EXIT instruction read outside the buffer. > > Signed-off-by: Marat Khalili <marat.khalili@huawei.com> > Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> > --- > app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ > lib/bpf/bpf_load.c | 2 +- > lib/bpf/bpf_validate.c | 20 +++++-- > 3 files changed, 135 insertions(+), 5 deletions(-) > > diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c > index b7c94ba1c7..6ecc49efff 100644 > --- a/app/test/test_bpf.c > +++ b/app/test/test_bpf.c > @@ -34,6 +34,124 @@ test_bpf(void) > #include <rte_ip.h> > > > +/* Tests of most simple BPF programs (no instructions, one instruction > etc.) */ > + > +/* > + * Try to load a simple bpf program from the instructions array. > + * > + * When `expected_errno` is zero, expect it to load successfully. > + * When `expected_errno` is non-zero, expect it to fail with this > `rte_errno`. > + * > + * @param nb_ins > + * Number of instructions in the `ins` array. > + * @param ins > + * BPF instructions array. 
> + * @param expected_errno > + * Expected result. > + * @return > + * TEST_SUCCESS on success, error code on failure. > + */ > +static int > +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int > expected_errno) > +{ > + const struct rte_bpf_prm prm = { > + .ins = ins, > + .nb_ins = nb_ins, > + .prog_arg = { > + .type = RTE_BPF_ARG_RAW, > + .size = sizeof(uint64_t), > + }, > + }; > + > + struct rte_bpf *const bpf = rte_bpf_load(&prm); > + const int actual_errno = rte_errno; > + rte_bpf_destroy(bpf); > + > + if (expected_errno != 0) { > + RTE_TEST_ASSERT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() == NULL"); > + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, > + "expect rte_errno == %d, found %d", > + expected_errno, actual_errno); > + } else > + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, > + "expect rte_bpf_load() != NULL"); > + > + return TEST_SUCCESS; > +} > + > +/* > + * Try and load completely empty BPF program. > + * Should fail because there is no EXIT (and also return value is > undefined). > + */ > +static int > +test_no_instructions(void) > +{ > + static const struct ebpf_insn ins[] = {}; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, > test_no_instructions); > + > +/* > + * Try and load a BPF program comprising single EXIT instruction. > + * Should fail because the return value is undefined. > + */ > +static int > +test_exit_only(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); > + > +/* > + * Try and load a BPF program with no EXIT instruction. > + * Should fail because of this. > + */ > +static int > +test_no_exit(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. 
*/ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); > +} > + > +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); > + > +/* > + * Try and load smallest possible valid BPF program. > + */ > +static int > +test_minimal_working(void) > +{ > + static const struct ebpf_insn ins[] = { > + { > + /* Set return value to the program argument. */ > + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), > + .src_reg = EBPF_REG_1, > + .dst_reg = EBPF_REG_0, > + }, > + { > + .code = (BPF_JMP | EBPF_EXIT), > + }, > + }; > + return bpf_load_test(RTE_DIM(ins), ins, 0); > +} > + > +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, > test_minimal_working); > + > /* > * Basic functional tests for librte_bpf. > * The main procedure - load eBPF program, execute it and > diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c > index 556e613762..6983c026af 100644 > --- a/lib/bpf/bpf_load.c > +++ b/lib/bpf/bpf_load.c > @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) > int32_t rc; > uint32_t i; > > - if (prm == NULL || prm->ins == NULL || > + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || > (prm->nb_xsym != 0 && prm->xsym == NULL)) { > rte_errno = EINVAL; > return NULL; > diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c > index 4f47d6dc7b..23444b3eaa 100644 > --- a/lib/bpf/bpf_validate.c > +++ b/lib/bpf/bpf_validate.c > @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node > *node, uint32_t nidx) > { > uint32_t ne; > > - if (nidx > bvf->prm->nb_ins) { > + if (nidx >= bvf->prm->nb_ins) { > RTE_BPF_LOG_LINE(ERR, > "%s: program boundary violation at pc: %u, next > pc: %u", > __func__, get_node_idx(bvf, node), nidx); > @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct > inst_node *node) > * Control Flow Graph (CFG). 
> * Information collected at this path would be used later > * to determine is there any loops, and/or unreachable instructions. > + * PREREQUISITE: there is at least one node. > */ > static void > dfs(struct bpf_verifier *bvf) > { > struct inst_node *next, *node; > > - node = bvf->in; > - while (node != NULL) { > + RTE_ASSERT(bvf->nb_nodes != 0); > + /* > + * Since there is at least one node, node with index 0 always > exists; > + * it is our program entry point. > + */ > + node = &bvf->in[0]; > + do { > > if (node->colour == WHITE) > set_node_colour(bvf, node, GREY); > @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) > } > } else > node = NULL; > - } > + } while (node != NULL); > } > > /* > @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) > if (rc != 0) > return rc; > > + if (bvf->nb_nodes == 0) { > + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", > + __func__, bvf); > + return -EINVAL; > + } > + > dfs(bvf); > > RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" > -- > 2.43.0 > > [-- Attachment #2: Type: text/html, Size: 9549 bytes --] ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v2 3/5] bpf: disallow empty program 2025-12-18 0:54 ` Stephen Hemminger @ 2025-12-17 8:58 ` Marat Khalili 0 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 8:58 UTC (permalink / raw) To: Stephen Hemminger Cc: Jerin Jacob, Morten Brørup, Konstantin Ananyev, dev, stable [-- Attachment #1: Type: text/plain, Size: 160 bytes --] > I see no need for docbook style comments for static functions. > Especially in test only code Thank you, will remember it and remove them in the next iteration. [-- Attachment #2: Type: text/html, Size: 1867 bytes --] ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 4/5] bpf: make add/subtract one program validate 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (2 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-16 18:20 ` [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first Marat Khalili 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests loading simple BPF programs that add one to or subtract one from their argument, and fix the signed integer overflow undefined behaviours they trigger: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix, perform the operation on unsigned integers, where overflow is well-defined, which was probably the original intent. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 56 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 6ecc49efff..c25f5a6f4d 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -152,6 +152,62 @@ test_minimal_working(void) REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. */ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_add_one_autotest, true, true, test_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); + /* * Basic functional tests for librte_bpf. 
* The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - (uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if at least one of the operands is not constant, -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (3 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 4/5] bpf: make add/subtract one program validate Marat Khalili @ 2025-12-16 18:20 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-16 18:20 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev, Ferruh Yigit; +Cc: dev, stable When a BPF program started with a conditional jump, only one (the true) execution branch of the program was evaluated. Any instructions jumped over were not evaluated and could contain invalid operations. The root cause was using a zero instruction index as a signal for ending evaluation when backtracking. Switch from using the previous instruction index for tracking execution history to a previous instruction pointer. The first instruction will not have it set, and therefore backtracking _from_ it will end evaluation, not backtracking _to_ it like before. 
Add two tests demonstrating the problem: * test_jump_over_invalid_first: loads BPF program with conditional jump over the invalid operation, should not succeed; * test_jump_over_invalid_non_first: same program with one extra instruction at the start to demonstrate that it is indeed invalid (and also guard against another kind of regression); Fixes: 6e12ec4c4d ("bpf: add more checks") Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/test_bpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 20 ++++------- 2 files changed, 86 insertions(+), 14 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index c25f5a6f4d..3f04cb28be 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -208,6 +208,86 @@ test_subtract_one(void) REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); +/* + * Conditionally jump over invalid operation as first instruction. + */ +static int +test_jump_over_invalid_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_first_autotest, true, true, + test_jump_over_invalid_first); + +/* + * Conditionally jump over invalid operation as non-first instruction. + */ +static int +test_jump_over_invalid_non_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_non_first_autotest, true, true, + test_jump_over_invalid_non_first); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 47ad6fef0f..64a8f227a3 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -64,7 +64,7 @@ struct inst_node { uint8_t cur_edge:4; uint8_t edge_type[MAX_EDGES]; uint32_t edge_dest[MAX_EDGES]; - uint32_t prev_node; + struct inst_node *prev_node; struct { struct bpf_eval_state *cur; /* save/restore for jcc targets */ struct bpf_eval_state *start; @@ -1875,12 +1875,6 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node, bvf->edge_type[type]++; } -static struct inst_node * -get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) -{ - return bvf->in + node->prev_node; -} - /* * Depth-First Search (DFS) through previously constructed * Control Flow Graph (CFG). 
@@ -1916,7 +1910,7 @@ dfs(struct bpf_verifier *bvf) if (next != NULL) { /* proceed with next child */ - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } else { /* @@ -1925,7 +1919,7 @@ dfs(struct bpf_verifier *bvf) */ set_node_colour(bvf, node, BLACK); node->cur_edge = 0; - node = get_prev_node(bvf, node); + node = node->prev_node; } } else node = NULL; @@ -2490,7 +2484,7 @@ evaluate(struct bpf_verifier *bvf) next = NULL; stats.nb_prune++; } else { - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } } else { @@ -2501,11 +2495,9 @@ evaluate(struct bpf_verifier *bvf) */ node->cur_edge = 0; save_safe_eval_state(bvf, node); - node = get_prev_node(bvf, node); + node = node->prev_node; - /* finished */ - if (node == bvf->in) - node = NULL; + /* first node will not have prev, signalling finish */ } } -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 0/6] bpf: simple tests and fixes 2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili ` (4 preceding siblings ...) 2025-12-16 18:20 ` [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili ` (5 more replies) 5 siblings, 6 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb; +Cc: dev, stable, konstantin.ananyev Add simple unit-style tests for rte_bpf_load, and fix some minor discovered bugs. v3: * Fixed unfortunate typos in the last patch commit message. * Marked all ARM opcodes with UINT32_C for clarity as suggested by Morten Brørup, in a separate commit. v2: * Moved new tests back into test_bpf.c * Changed library macros RTE_SHIFT_VALxx to handle variable first arguments instead of introducing new macros. * Added another test and fix, for conditional jump as first instruction. Marat Khalili (6): eal: variable first arguments of RTE_SHIFT_VALxx bpf: fix signed shift overflows in ARM JIT bpf: mark ARM opcodes with UINT32_C bpf: disallow empty program bpf: make add/subtract one program validate bpf: fix BPF validation w/ conditional jump first app/test/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++ lib/bpf/bpf_jit_arm64.c | 184 ++++++++++++------------- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 48 ++++--- lib/eal/include/rte_bitops.h | 4 +- 5 files changed, 375 insertions(+), 117 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili ` (4 subsequent siblings) 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Jack Bond-Preston Cc: dev, stable, konstantin.ananyev Macros RTE_SHIFT_VAL32 and RTE_SHIFT_VAL64 were implemented by applying UINT32_C or UINT64_C respectively to their first argument. As a consequence, the first argument had to be a constant. Replace UINT32_C and UINT64_C with casts to uint32_t and uint64_t to allow these arguments to be variable. For constants the result should be the same. (Yes, technically UINT64_C promotes to uint_least64_t, not uint64_t, but I think most users of RTE_SHIFT_VAL64 expect the result to be uint64_t.) Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Reviewed-by: Morten Brørup <mb@smartsharesystems.com> --- lib/eal/include/rte_bitops.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h index 2d1b9d281c..aa6ac73abb 100644 --- a/lib/eal/include/rte_bitops.h +++ b/lib/eal/include/rte_bitops.h @@ -51,7 +51,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (32 - width of val). */ -#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr)) +#define RTE_SHIFT_VAL32(val, nr) ((uint32_t)(val) << (nr)) /** * Get the uint64_t shifted value. @@ -61,7 +61,7 @@ extern "C" { * @param nr * The shift number in range of 0 to (64 - width of val). */ -#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr)) +#define RTE_SHIFT_VAL64(val, nr) ((uint64_t)(val) << (nr)) /** * Generate a contiguous 32-bit mask -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili ` (3 subsequent siblings) 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable Left shifts of integer literals and bool values overwriting the sign bit were used multiple times in bpf_jit_arm64.c. E.g.: insn = (!!is64) << 31; where is64 has type bool (double bang is a no-op here). The operand of left shift was promoted to type int, which when 32-bit wide cannot represent the result. Similarly literal integers have int type by default. Sanitizer produced the following diagnostic during runtime (for various lines): lib/bpf/bpf_jit_arm64.c:241:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' To fix the issue use RTE_BIT32 and similar macros instead. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Morten Brørup <mb@smartsharesystems.com> --- lib/bpf/bpf_jit_arm64.c | 162 ++++++++++++++++++++-------------------- 1 file changed, 81 insertions(+), 81 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 96b8cd2e03..76df1e4ba1 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -28,7 +28,7 @@ #define A64_ZR 31 #define check_imm(n, val) (((val) >= 0) ? 
!!((val) >> (n)) : !!((~val) >> (n))) -#define mask_imm(n, val) ((val) & ((1 << (n)) - 1)) +#define mask_imm(n, val) ((val) & (RTE_BIT32(n) - 1)) struct ebpf_a64_map { uint32_t off; /* eBPF to arm64 insn offset mapping for jump */ @@ -238,12 +238,12 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, uint32_t insn, imm; imm = mask_imm(12, imm12); - insn = (!!is64) << 31; - insn |= (!!sub) << 30; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(sub, 30); insn |= 0x11000000; insn |= rd; - insn |= rn << 5; - insn |= imm << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(imm, 10); emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || check_imm(12, imm12)); @@ -279,16 +279,16 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t rt2, uint8_t rn, { uint32_t insn; - insn = (!!load) << 22; - insn |= (!!pre_index) << 24; + insn = RTE_SHIFT_VAL32(load, 22); + insn |= RTE_SHIFT_VAL32(pre_index, 24); insn |= 0xa8800000; insn |= rt; - insn |= rn << 5; - insn |= rt2 << 10; + insn |= RTE_SHIFT_VAL32(rn, 5); + insn |= RTE_SHIFT_VAL32(rt2, 10); if (push) - insn |= 0x7e << 15; /* 0x7e means -2 with imm7 */ + insn |= RTE_SHIFT_VAL32(0x7e, 15); /* 0x7e means -2 with imm7 */ else - insn |= 0x2 << 15; + insn |= RTE_SHIFT_VAL32(0x2, 15); emit_insn(ctx, insn, check_reg(rn) || check_reg(rt) || check_reg(rt2)); @@ -317,11 +317,11 @@ mov_imm(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t type, { uint32_t insn; - insn = (!!is64) << 31; - insn |= type << 29; - insn |= 0x25 << 23; - insn |= (shift/16) << 21; - insn |= imm16 << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(type, 29); + insn |= RTE_SHIFT_VAL32(0x25, 23); + insn |= RTE_SHIFT_VAL32(shift/16, 21); + insn |= RTE_SHIFT_VAL32(imm16, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_mov_hw(is64, shift)); @@ -334,7 +334,7 @@ emit_mov_imm32(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint32_t val) uint16_t lower = val & 0xffff; /* 
Positive number */ - if ((val & 1UL << 31) == 0) { + if ((val & RTE_BIT32(31)) == 0) { mov_imm(ctx, is64, rd, A64_MOVZ, lower, 0); if (upper) mov_imm(ctx, is64, rd, A64_MOVK, upper, 16); @@ -393,21 +393,21 @@ emit_ls(struct a64_jit_ctx *ctx, uint8_t sz, uint8_t rt, uint8_t rn, uint8_t rm, { uint32_t insn; - insn = 0x1c1 << 21; + insn = RTE_SHIFT_VAL32(0x1c1, 21); if (load) - insn |= 1 << 22; + insn |= RTE_BIT32(22); if (sz == BPF_B) - insn |= 0 << 30; + insn |= RTE_SHIFT_VAL32(0, 30); else if (sz == BPF_H) - insn |= 1 << 30; + insn |= RTE_SHIFT_VAL32(1, 30); else if (sz == BPF_W) - insn |= 2 << 30; + insn |= RTE_SHIFT_VAL32(2, 30); else if (sz == EBPF_DW) - insn |= 3 << 30; + insn |= RTE_SHIFT_VAL32(3, 30); - insn |= rm << 16; - insn |= 0x1a << 10; /* LSL and S = 0 */ - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1a, 10); /* LSL and S = 0 */ + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn) || check_reg(rm) || @@ -436,10 +436,10 @@ emit_add_sub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 21; /* shift == 0 */ - insn |= rm << 16; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 21); /* shift == 0 */ + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -468,11 +468,11 @@ emit_mul(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rm) { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= A64_ZR << 10; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(A64_ZR, 10); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -489,11 +489,11 @@ emit_data_process_two_src(struct a64_jit_ctx *ctx, bool is64, 
uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd6 << 21; - insn |= rm << 16; - insn |= op << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd6, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(op, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -532,14 +532,14 @@ emit_bitfield(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; + insn = RTE_SHIFT_VAL32(is64, 31); if (insn) - insn |= 1 << 22; /* Set N bit when is64 is set */ - insn |= op << 29; - insn |= 0x26 << 23; - insn |= immr << 16; - insn |= imms << 10; - insn |= rn << 5; + insn |= RTE_BIT32(22); /* Set N bit when is64 is set */ + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x26, 23); + insn |= RTE_SHIFT_VAL32(immr, 16); + insn |= RTE_SHIFT_VAL32(imms, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || @@ -578,11 +578,11 @@ emit_logical(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, { uint32_t insn; - insn = (!!is64) << 31; - insn |= op << 29; - insn |= 0x50 << 21; - insn |= rm << 16; - insn |= rd << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(op, 29); + insn |= RTE_SHIFT_VAL32(0x50, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rm)); @@ -612,12 +612,12 @@ emit_msub(struct a64_jit_ctx *ctx, bool is64, uint8_t rd, uint8_t rn, { uint32_t insn; - insn = (!!is64) << 31; - insn |= 0xd8 << 21; - insn |= rm << 16; - insn |= 0x1 << 15; - insn |= ra << 10; - insn |= rn << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0xd8, 21); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(0x1, 15); + insn |= RTE_SHIFT_VAL32(ra, 10); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rd; emit_insn(ctx, insn, check_reg(rd) || check_reg(rn) || 
check_reg(rm) || @@ -638,7 +638,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) uint32_t insn; insn = 0xd63f0000; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); } @@ -669,22 +669,22 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) uint32_t insn; insn = 0xdac00000; - insn |= rd << 5; + insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; switch (imm) { case 16: - insn |= 1 << 10; + insn |= RTE_SHIFT_VAL32(1, 10); emit_insn(ctx, insn, check_reg(rd)); emit_zero_extend(ctx, rd, 16); break; case 32: - insn |= 2 << 10; + insn |= RTE_SHIFT_VAL32(2, 10); emit_insn(ctx, insn, check_reg(rd)); /* Upper 32 bits already cleared */ break; case 64: - insn |= 3 << 10; + insn |= RTE_SHIFT_VAL32(3, 10); emit_insn(ctx, insn, check_reg(rd)); break; default: @@ -933,9 +933,9 @@ emit_cbnz(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = (!!is64) << 31; - insn |= 0x35 << 24; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(0x35, 24); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_imm(19, imm19)); @@ -947,7 +947,7 @@ emit_b(struct a64_jit_ctx *ctx, int32_t imm26) uint32_t insn, imm; imm = mask_imm(26, imm26); - insn = 0x5 << 26; + insn = RTE_SHIFT_VAL32(0x5, 26); insn |= imm; emit_insn(ctx, insn, check_imm(26, imm26)); @@ -971,9 +971,9 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) uint32_t insn; insn = 0xb820001f; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rs) || check_reg(rn)); } @@ -984,8 +984,8 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) uint32_t insn; insn = 0x885f7c00; - insn |= (!!is64) << 30; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rn, 
5); insn |= rt; emit_insn(ctx, insn, check_reg(rt) || check_reg(rn)); @@ -998,9 +998,9 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, uint32_t insn; insn = 0x88007c00; - insn |= (!!is64) << 30; - insn |= rs << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 30); + insn |= RTE_SHIFT_VAL32(rs, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; emit_insn(ctx, insn, check_reg(rs) || check_reg(rt) || check_reg(rn)); @@ -1051,9 +1051,9 @@ emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t insn; insn = opc; - insn |= (!!is64) << 31; - insn |= rm << 16; - insn |= rn << 5; + insn |= RTE_SHIFT_VAL32(is64, 31); + insn |= RTE_SHIFT_VAL32(rm, 16); + insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn) || check_reg(rm)); } @@ -1076,8 +1076,8 @@ emit_b_cond(struct a64_jit_ctx *ctx, uint8_t cond, int32_t imm19) uint32_t insn, imm; imm = mask_imm(19, imm19); - insn = 0x15 << 26; - insn |= imm << 5; + insn = RTE_SHIFT_VAL32(0x15, 26); + insn |= RTE_SHIFT_VAL32(imm, 5); insn |= cond; emit_insn(ctx, insn, check_cond(cond) || check_imm(19, imm19)); @@ -1301,7 +1301,7 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) break; /* dst = imm64 */ case (BPF_LD | BPF_IMM | EBPF_DW): - u64 = ((uint64_t)ins[1].imm << 32) | (uint32_t)imm; + u64 = RTE_SHIFT_VAL64(ins[1].imm, 32) | (uint32_t)imm; emit_mov_imm(ctx, 1, dst, u64); i++; break; -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili 2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili 2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 4/6] bpf: disallow empty program Marat Khalili ` (2 subsequent siblings) 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Wathsala Vithanage, Konstantin Ananyev Cc: dev, stable To prevent confusion as to whether operations with them will be executed as signed or unsigned (and with what bit width) mark all ARM opcode constants with UINT32_C. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- lib/bpf/bpf_jit_arm64.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index 76df1e4ba1..9263234338 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -12,7 +12,7 @@ #include "bpf_impl.h" #define A64_REG_MASK(r) ((r) & 0x1f) -#define A64_INVALID_OP_CODE (0xffffffff) +#define A64_INVALID_OP_CODE (UINT32_C(0xffffffff)) #define TMP_REG_1 (EBPF_REG_10 + 1) #define TMP_REG_2 (EBPF_REG_10 + 2) @@ -228,7 +228,7 @@ emit_insn(struct a64_jit_ctx *ctx, uint32_t insn, int error) static void emit_ret(struct a64_jit_ctx *ctx) { - emit_insn(ctx, 0xd65f03c0, 0); + emit_insn(ctx, UINT32_C(0xd65f03c0), 0); } static void @@ -240,7 +240,7 @@ emit_add_sub_imm(struct a64_jit_ctx *ctx, bool is64, bool sub, uint8_t rd, imm = mask_imm(12, imm12); insn = RTE_SHIFT_VAL32(is64, 31); insn |= RTE_SHIFT_VAL32(sub, 30); - insn |= 0x11000000; + insn |= UINT32_C(0x11000000); insn |= rd; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(imm, 10); @@ -281,7 +281,7 @@ emit_ls_pair_64(struct a64_jit_ctx *ctx, uint8_t rt, uint8_t 
rt2, uint8_t rn, insn = RTE_SHIFT_VAL32(load, 22); insn |= RTE_SHIFT_VAL32(pre_index, 24); - insn |= 0xa8800000; + insn |= UINT32_C(0xa8800000); insn |= rt; insn |= RTE_SHIFT_VAL32(rn, 5); insn |= RTE_SHIFT_VAL32(rt2, 10); @@ -637,7 +637,7 @@ emit_blr(struct a64_jit_ctx *ctx, uint8_t rn) { uint32_t insn; - insn = 0xd63f0000; + insn = UINT32_C(0xd63f0000); insn |= RTE_SHIFT_VAL32(rn, 5); emit_insn(ctx, insn, check_reg(rn)); @@ -668,7 +668,7 @@ emit_rev(struct a64_jit_ctx *ctx, uint8_t rd, int32_t imm) { uint32_t insn; - insn = 0xdac00000; + insn = UINT32_C(0xdac00000); insn |= RTE_SHIFT_VAL32(rd, 5); insn |= rd; @@ -970,7 +970,7 @@ emit_stadd(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rn) { uint32_t insn; - insn = 0xb820001f; + insn = UINT32_C(0xb820001f); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -983,7 +983,7 @@ emit_ldxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rt, uint8_t rn) { uint32_t insn; - insn = 0x885f7c00; + insn = UINT32_C(0x885f7c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rn, 5); insn |= rt; @@ -997,7 +997,7 @@ emit_stxr(struct a64_jit_ctx *ctx, bool is64, uint8_t rs, uint8_t rt, { uint32_t insn; - insn = 0x88007c00; + insn = UINT32_C(0x88007c00); insn |= RTE_SHIFT_VAL32(is64, 30); insn |= RTE_SHIFT_VAL32(rs, 16); insn |= RTE_SHIFT_VAL32(rn, 5); @@ -1042,8 +1042,8 @@ emit_xadd(struct a64_jit_ctx *ctx, uint8_t op, uint8_t tmp1, uint8_t tmp2, } } -#define A64_CMP 0x6b00000f -#define A64_TST 0x6a00000f +#define A64_CMP (UINT32_C(0x6b00000f)) +#define A64_TST (UINT32_C(0x6a00000f)) static void emit_cmp_tst(struct a64_jit_ctx *ctx, bool is64, uint8_t rn, uint8_t rm, uint32_t opc) -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 4/6] bpf: disallow empty program 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (2 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 5/6] bpf: make add/subtract one program validate Marat Khalili 2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests for some simple cases: * Program with no instructions; * Program with only EXIT instruction but no return value set; * Program with return value set but no EXIT instruction; * Minimal valid program with return value set and an EXIT instruction. Fix found bugs: * a program with no instructions was accepted; * a program with no EXIT instruction read outside the buffer. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 118 +++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_validate.c | 20 +++++-- 3 files changed, 135 insertions(+), 5 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index b7c94ba1c7..6ecc49efff 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -34,6 +34,124 @@ test_bpf(void) #include <rte_ip.h> +/* Tests of most simple BPF programs (no instructions, one instruction etc.) */ + +/* + * Try to load a simple bpf program from the instructions array. + * + * When `expected_errno` is zero, expect it to load successfully. + * When `expected_errno` is non-zero, expect it to fail with this `rte_errno`. + * + * @param nb_ins + * Number of instructions in the `ins` array. + * @param ins + * BPF instructions array. + * @param expected_errno + * Expected result. 
+ * @return + * TEST_SUCCESS on success, error code on failure. + */ +static int +bpf_load_test(uint32_t nb_ins, const struct ebpf_insn *ins, int expected_errno) +{ + const struct rte_bpf_prm prm = { + .ins = ins, + .nb_ins = nb_ins, + .prog_arg = { + .type = RTE_BPF_ARG_RAW, + .size = sizeof(uint64_t), + }, + }; + + struct rte_bpf *const bpf = rte_bpf_load(&prm); + const int actual_errno = rte_errno; + rte_bpf_destroy(bpf); + + if (expected_errno != 0) { + RTE_TEST_ASSERT_EQUAL(bpf, NULL, + "expect rte_bpf_load() == NULL"); + RTE_TEST_ASSERT_EQUAL(actual_errno, expected_errno, + "expect rte_errno == %d, found %d", + expected_errno, actual_errno); + } else + RTE_TEST_ASSERT_NOT_EQUAL(bpf, NULL, + "expect rte_bpf_load() != NULL"); + + return TEST_SUCCESS; +} + +/* + * Try and load completely empty BPF program. + * Should fail because there is no EXIT (and also return value is undefined). + */ +static int +test_no_instructions(void) +{ + static const struct ebpf_insn ins[] = {}; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_instructions_autotest, true, true, test_no_instructions); + +/* + * Try and load a BPF program comprising single EXIT instruction. + * Should fail because the return value is undefined. + */ +static int +test_exit_only(void) +{ + static const struct ebpf_insn ins[] = { + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_exit_only_autotest, true, true, test_exit_only); + +/* + * Try and load a BPF program with no EXIT instruction. + * Should fail because of this. + */ +static int +test_no_exit(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. 
*/ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_no_exit_autotest, true, true, test_no_exit); + +/* + * Try and load smallest possible valid BPF program. + */ +static int +test_minimal_working(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 556e613762..6983c026af 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -88,7 +88,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) int32_t rc; uint32_t i; - if (prm == NULL || prm->ins == NULL || + if (prm == NULL || prm->ins == NULL || prm->nb_ins == 0 || (prm->nb_xsym != 0 && prm->xsym == NULL)) { rte_errno = EINVAL; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 4f47d6dc7b..23444b3eaa 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1827,7 +1827,7 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) { uint32_t ne; - if (nidx > bvf->prm->nb_ins) { + if (nidx >= bvf->prm->nb_ins) { RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, next pc: %u", __func__, get_node_idx(bvf, node), nidx); @@ -1886,14 +1886,20 @@ get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) * Control Flow Graph (CFG). * Information collected at this path would be used later * to determine is there any loops, and/or unreachable instructions. + * PREREQUISITE: there is at least one node. 
*/ static void dfs(struct bpf_verifier *bvf) { struct inst_node *next, *node; - node = bvf->in; - while (node != NULL) { + RTE_ASSERT(bvf->nb_nodes != 0); + /* + * Since there is at least one node, node with index 0 always exists; + * it is our program entry point. + */ + node = &bvf->in[0]; + do { if (node->colour == WHITE) set_node_colour(bvf, node, GREY); @@ -1923,7 +1929,7 @@ dfs(struct bpf_verifier *bvf) } } else node = NULL; - } + } while (node != NULL); } /* @@ -2062,6 +2068,12 @@ validate(struct bpf_verifier *bvf) if (rc != 0) return rc; + if (bvf->nb_nodes == 0) { + RTE_BPF_LOG_LINE(ERR, "%s(%p) the program is empty", + __func__, bvf); + return -EINVAL; + } + dfs(bvf); RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 5/6] bpf: make add/subtract one program validate 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (3 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 4/6] bpf: disallow empty program Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev; +Cc: dev, stable Add tests loading simple BPF programs adding or subtracting one to its argument and fix triggered signed integer overflow undefined behaviours: lib/bpf/bpf_validate.c:324:24: runtime error: signed integer overflow: 1 + 9223372036854775807 cannot be represented in type 'long int' lib/bpf/bpf_validate.c:352:24: runtime error: signed integer overflow: -9223372036854775808 - 1 cannot be represented in type 'long int' As a minimal possible fix perform operation on unsigned integers where overflow is well-defined, which was probably the original intent. Signed-off-by: Marat Khalili <marat.khalili@huawei.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> --- app/test/test_bpf.c | 56 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 8 +++--- 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 6ecc49efff..c25f5a6f4d 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -152,6 +152,62 @@ test_minimal_working(void) REGISTER_FAST_TEST(bpf_minimal_working_autotest, true, true, test_minimal_working); +/* + * Try and load valid BPF program adding one to the argument. + */ +static int +test_add_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to one. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_K), + .dst_reg = EBPF_REG_0, + .imm = 1, + }, + { + /* Add program argument to the return value. 
*/ + .code = (EBPF_ALU64 | BPF_ADD | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_add_one_autotest, true, true, test_add_one); + +/* + * Try and load valid BPF program subtracting one from the argument. + */ +static int +test_subtract_one(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Subtract one from the program argument. */ + .code = (EBPF_ALU64 | BPF_SUB | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 1, + }, + { + /* Set return value to the result. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, 0); +} + +REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); + /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 23444b3eaa..47ad6fef0f 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -243,8 +243,8 @@ eval_add(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min + rs->u.min) & msk; rv.u.max = (rd->u.max + rs->u.max) & msk; - rv.s.min = (rd->s.min + rs->s.min) & msk; - rv.s.max = (rd->s.max + rs->s.max) & msk; + rv.s.min = ((uint64_t)rd->s.min + (uint64_t)rs->s.min) & msk; + rv.s.max = ((uint64_t)rd->s.max + (uint64_t)rs->s.max) & msk; /* * if at least one of the operands is not constant, @@ -272,8 +272,8 @@ eval_sub(struct bpf_reg_val *rd, const struct bpf_reg_val *rs, uint64_t msk) rv.u.min = (rd->u.min - rs->u.max) & msk; rv.u.max = (rd->u.max - rs->u.min) & msk; - rv.s.min = (rd->s.min - rs->s.max) & msk; - rv.s.max = (rd->s.max - rs->s.min) & msk; + rv.s.min = ((uint64_t)rd->s.min - (uint64_t)rs->s.max) & msk; + rv.s.max = ((uint64_t)rd->s.max - (uint64_t)rs->s.min) & msk; /* * if 
at least one of the operands is not constant, -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first 2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili ` (4 preceding siblings ...) 2025-12-17 18:01 ` [PATCH v3 5/6] bpf: make add/subtract one program validate Marat Khalili @ 2025-12-17 18:01 ` Marat Khalili 5 siblings, 0 replies; 17+ messages in thread From: Marat Khalili @ 2025-12-17 18:01 UTC (permalink / raw) To: stephen, jerinjacobk, mb, Konstantin Ananyev, Ferruh Yigit; +Cc: dev, stable When the BPF program was starting with a conditional jump only one (true) execution branch of the program was evaluated. Any instructions jumped over were not evaluated and could contain invalid operations. The root cause was using zero instruction index as a signal for ending evaluation when backtracking. Switch from using previous instruction index for tracking execution history to a previous instruction pointer. First instruction will not have it set, and therefore backtracking _from_ it will end evaluation, not backtracking _to_ it like before. Add two tests demonstrating the problem: * test_jump_over_invalid_first: loads BPF program with conditional jump over the invalid operation, should not succeed; * test_jump_over_invalid_non_first: same program with one extra instruction at the start to demonstrate that it is indeed invalid (and also guard against another kind of regression); Fixes: 6e12ec4c4d6d ("bpf: add more checks") Signed-off-by: Marat Khalili <marat.khalili@huawei.com> --- app/test/test_bpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++ lib/bpf/bpf_validate.c | 20 ++++------- 2 files changed, 86 insertions(+), 14 deletions(-) diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index c25f5a6f4d..3f04cb28be 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -208,6 +208,86 @@ test_subtract_one(void) REGISTER_FAST_TEST(bpf_subtract_one_autotest, true, true, test_subtract_one); +/* + * Conditionally jump over invalid operation as first instruction. 
+ */ +static int +test_jump_over_invalid_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_first_autotest, true, true, + test_jump_over_invalid_first); + +/* + * Conditionally jump over invalid operation as non-first instruction. + */ +static int +test_jump_over_invalid_non_first(void) +{ + static const struct ebpf_insn ins[] = { + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + /* Jump over the next instruction for some r1. */ + .code = (BPF_JMP | BPF_JEQ | BPF_K), + .dst_reg = EBPF_REG_1, + .imm = 42, + .off = 1, + }, + { + /* Write 0xDEADBEEF to [r1 + INT16_MIN]. */ + .code = (BPF_ST | BPF_MEM | EBPF_DW), + .dst_reg = EBPF_REG_1, + .off = INT16_MIN, + .imm = 0xDEADBEEF, + }, + { + /* Set return value to the program argument. */ + .code = (EBPF_ALU64 | EBPF_MOV | BPF_X), + .src_reg = EBPF_REG_1, + .dst_reg = EBPF_REG_0, + }, + { + .code = (BPF_JMP | EBPF_EXIT), + }, + }; + return bpf_load_test(RTE_DIM(ins), ins, EINVAL); +} + +REGISTER_FAST_TEST(bpf_jump_over_invalid_non_first_autotest, true, true, + test_jump_over_invalid_non_first); + /* * Basic functional tests for librte_bpf. 
* The main procedure - load eBPF program, execute it and diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 47ad6fef0f..64a8f227a3 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -64,7 +64,7 @@ struct inst_node { uint8_t cur_edge:4; uint8_t edge_type[MAX_EDGES]; uint32_t edge_dest[MAX_EDGES]; - uint32_t prev_node; + struct inst_node *prev_node; struct { struct bpf_eval_state *cur; /* save/restore for jcc targets */ struct bpf_eval_state *start; @@ -1875,12 +1875,6 @@ set_edge_type(struct bpf_verifier *bvf, struct inst_node *node, bvf->edge_type[type]++; } -static struct inst_node * -get_prev_node(struct bpf_verifier *bvf, struct inst_node *node) -{ - return bvf->in + node->prev_node; -} - /* * Depth-First Search (DFS) through previously constructed * Control Flow Graph (CFG). @@ -1916,7 +1910,7 @@ dfs(struct bpf_verifier *bvf) if (next != NULL) { /* proceed with next child */ - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } else { /* @@ -1925,7 +1919,7 @@ dfs(struct bpf_verifier *bvf) */ set_node_colour(bvf, node, BLACK); node->cur_edge = 0; - node = get_prev_node(bvf, node); + node = node->prev_node; } } else node = NULL; @@ -2490,7 +2484,7 @@ evaluate(struct bpf_verifier *bvf) next = NULL; stats.nb_prune++; } else { - next->prev_node = get_node_idx(bvf, node); + next->prev_node = node; node = next; } } else { @@ -2501,11 +2495,9 @@ evaluate(struct bpf_verifier *bvf) */ node->cur_edge = 0; save_safe_eval_state(bvf, node); - node = get_prev_node(bvf, node); + node = node->prev_node; - /* finished */ - if (node == bvf->in) - node = NULL; + /* first node will not have prev, signalling finish */ } } -- 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2025-12-17 18:03 UTC | newest]
Thread overview: 17+ messages
[not found] <20251110153046.63518-1-marat.khalili@huawei.com>
2025-12-16 18:20 ` [PATCH v2 0/5] bpf: simple tests and fixes Marat Khalili
2025-12-16 18:20 ` [PATCH v2 1/5] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili
2025-12-17 9:25 ` Morten Brørup
2025-12-16 18:20 ` [PATCH v2 2/5] bpf: fix signed shift overflows in ARM JIT Marat Khalili
2025-12-17 9:49 ` Morten Brørup
2025-12-16 18:20 ` [PATCH v2 3/5] bpf: disallow empty program Marat Khalili
2025-12-18 0:54 ` Stephen Hemminger
2025-12-17 8:58 ` Marat Khalili
2025-12-16 18:20 ` [PATCH v2 4/5] bpf: make add/subtract one program validate Marat Khalili
2025-12-16 18:20 ` [PATCH v2 5/5] bpf: fix BPF validation w/ conditional jump first Marat Khalili
2025-12-17 18:01 ` [PATCH v3 0/6] bpf: simple tests and fixes Marat Khalili
2025-12-17 18:01 ` [PATCH v3 1/6] eal: variable first arguments of RTE_SHIFT_VALxx Marat Khalili
2025-12-17 18:01 ` [PATCH v3 2/6] bpf: fix signed shift overflows in ARM JIT Marat Khalili
2025-12-17 18:01 ` [PATCH v3 3/6] bpf: mark ARM opcodes with UINT32_C Marat Khalili
2025-12-17 18:01 ` [PATCH v3 4/6] bpf: disallow empty program Marat Khalili
2025-12-17 18:01 ` [PATCH v3 5/6] bpf: make add/subtract one program validate Marat Khalili
2025-12-17 18:01 ` [PATCH v3 6/6] bpf: fix BPF validation w/ conditional jump first Marat Khalili