From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Konstantin Ananyev
Cc: Isaac Boukris, Morten Brørup, Stephen Hemminger, dpdk stable
Subject: patch 'bpf: fix load hangs with six IPv6 addresses' has been queued to stable release 21.11.8
Date: Fri, 23 Aug 2024 17:18:06 +0100
Message-ID: <20240823161929.1004778-58-ktraynor@redhat.com>
In-Reply-To: <20240823161929.1004778-1-ktraynor@redhat.com>
References: <20240823161929.1004778-1-ktraynor@redhat.com>
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 21.11.8

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 08/28/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
	https://github.com/kevintraynor/dpdk-stable/commit/7598b5b537e6b1b4f5f2409153359b7d04594baf

Thanks.
Kevin

---
>From 7598b5b537e6b1b4f5f2409153359b7d04594baf Mon Sep 17 00:00:00 2001
From: Konstantin Ananyev
Date: Thu, 27 Jun 2024 19:04:41 +0100
Subject: [PATCH] bpf: fix load hangs with six IPv6 addresses
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

[ upstream commit a258eebdfb22f95a8a44d31b0eab639aed0a0c4b ]

As described in https://bugs.dpdk.org/show_bug.cgi?id=1465,
converting the following cBPF filter:
"host 1::1 or host 1::1 or host 1::1 or host 1::1 or host 1::1 or host 1::1"
takes the BPF verifier too long to complete (up to 25 seconds).

Looking at it, I didn't find any actual functional bug.
In fact, it does what is expected: go through each possible path of the
BPF program and evaluate the register/stack state for each instruction.
The problem is that, for a program with many conditional branches, the
number of possible paths starts to grow exponentially, and such a walk
becomes very excessive.

So, to minimize the number of evaluations, this patch implements a
heuristic similar to what the Linux kernel does: state pruning.
If, from a given instruction with a given program state, we explore all
possible paths and each of them reaches bpf_exit() without any
complaints and with a valid R0 value, then that program state can be
marked as 'safe' for that instruction. When we later arrive at the same
instruction with a state equivalent to an earlier 'safe' state, we can
prune the search.

For now, only states for JCC targets are saved/examined.
Plus, add some extra logging at DEBUG level.
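To illustrate the pruning idea (this is a minimal standalone sketch, not the
patch's actual code; `reg_range`, `range_covers` and `can_prune` are
hypothetical names, simplifying the patch's `cmp_reg_val_within()` check):
a tracked 'safe' state covers a newly reached state when it is at least as
conservative, i.e. its value range contains the new state's range, so
re-exploring from the new state cannot uncover anything new.

```c
#include <stdint.h>

/* Toy register state: an unsigned value range (the real verifier also
 * tracks signed ranges, masks and pointer types). */
struct reg_range {
	uint64_t min;
	uint64_t max;
};

/* A 'safe' state covers the current one if its range contains it:
 * every value the current state may hold was already proven safe. */
int range_covers(const struct reg_range *safe, const struct reg_range *cur)
{
	return safe->min <= cur->min && safe->max >= cur->max;
}

/* Prune check against the list of previously accepted safe states,
 * mirroring the SLIST_FOREACH() walk in prune_eval_state(). */
int can_prune(const struct reg_range *safe_list, int n,
	      const struct reg_range *cur)
{
	for (int i = 0; i < n; i++)
		if (range_covers(&safe_list[i], cur))
			return 1;
	return 0;
}
```

With this check, a JCC target reached via many paths is fully evaluated only
for states not covered by an earlier safe state, which is what turns the
exponential walk into something tractable.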
Bugzilla ID: 1465 Fixes: 8021917293d0 ("bpf: add extra validation for input BPF program") Reported-by: Isaac Boukris Signed-off-by: Konstantin Ananyev Acked-by: Morten Brørup Acked-by: Stephen Hemminger --- lib/bpf/bpf_validate.c | 305 ++++++++++++++++++++++++++++++++++------- 1 file changed, 255 insertions(+), 50 deletions(-) diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 81bede3701..dfbef6ca42 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -32,8 +32,11 @@ struct bpf_reg_val { struct bpf_eval_state { + SLIST_ENTRY(bpf_eval_state) next; /* for @safe list traversal */ struct bpf_reg_val rv[EBPF_REG_NUM]; struct bpf_reg_val sv[MAX_BPF_STACK_SIZE / sizeof(uint64_t)]; }; +SLIST_HEAD(bpf_evst_head, bpf_eval_state); + /* possible instruction node colour */ enum { @@ -55,4 +58,7 @@ enum { #define MAX_EDGES 2 +/* max number of 'safe' evaluated states to track per node */ +#define NODE_EVST_MAX 32 + struct inst_node { uint8_t colour; @@ -62,5 +68,16 @@ struct inst_node { uint32_t edge_dest[MAX_EDGES]; uint32_t prev_node; - struct bpf_eval_state *evst; + struct { + struct bpf_eval_state *cur; /* save/restore for jcc targets */ + struct bpf_eval_state *start; + struct bpf_evst_head safe; /* safe states for track/prune */ + uint32_t nb_safe; + } evst; +}; + +struct evst_pool { + uint32_t num; + uint32_t cur; + struct bpf_eval_state *ent; }; @@ -76,9 +93,6 @@ struct bpf_verifier { struct bpf_eval_state *evst; struct inst_node *evin; - struct { - uint32_t num; - uint32_t cur; - struct bpf_eval_state *ent; - } evst_pool; + struct evst_pool evst_sr_pool; /* for evst save/restore */ + struct evst_pool evst_tp_pool; /* for evst track/prune */ }; @@ -1088,5 +1102,5 @@ eval_jcc(struct bpf_verifier *bvf, const struct ebpf_insn *ins) tst = bvf->evst; - fst = bvf->evin->evst; + fst = bvf->evin->evst.cur; frd = fst->rv + ins->dst_reg; @@ -1817,6 +1831,6 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) if (nidx > 
bvf->prm->nb_ins) { - RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " - "next pc: %u\n", + RTE_BPF_LOG(ERR, + "%s: program boundary violation at pc: %u, next pc: %u\n", __func__, get_node_idx(bvf, node), nidx); return -EINVAL; @@ -2094,20 +2108,21 @@ validate(struct bpf_verifier *bvf) */ static struct bpf_eval_state * -pull_eval_state(struct bpf_verifier *bvf) +pull_eval_state(struct evst_pool *pool) { uint32_t n; - n = bvf->evst_pool.cur; - if (n == bvf->evst_pool.num) + n = pool->cur; + if (n == pool->num) return NULL; - bvf->evst_pool.cur = n + 1; - return bvf->evst_pool.ent + n; + pool->cur = n + 1; + return pool->ent + n; } static void -push_eval_state(struct bpf_verifier *bvf) +push_eval_state(struct evst_pool *pool) { - bvf->evst_pool.cur--; + RTE_ASSERT(pool->cur != 0); + pool->cur--; } @@ -2116,6 +2131,7 @@ evst_pool_fini(struct bpf_verifier *bvf) { bvf->evst = NULL; - free(bvf->evst_pool.ent); - memset(&bvf->evst_pool, 0, sizeof(bvf->evst_pool)); + free(bvf->evst_sr_pool.ent); + memset(&bvf->evst_sr_pool, 0, sizeof(bvf->evst_sr_pool)); + memset(&bvf->evst_tp_pool, 0, sizeof(bvf->evst_tp_pool)); } @@ -2123,29 +2139,81 @@ static int evst_pool_init(struct bpf_verifier *bvf) { - uint32_t n; + uint32_t k, n; - n = bvf->nb_jcc_nodes + 1; + /* + * We need nb_jcc_nodes + 1 for save_cur/restore_cur + * remaining ones will be used for state tracking/pruning. 
+ */ + k = bvf->nb_jcc_nodes + 1; + n = k * 3; - bvf->evst_pool.ent = calloc(n, sizeof(bvf->evst_pool.ent[0])); - if (bvf->evst_pool.ent == NULL) + bvf->evst_sr_pool.ent = calloc(n, sizeof(bvf->evst_sr_pool.ent[0])); + if (bvf->evst_sr_pool.ent == NULL) return -ENOMEM; - bvf->evst_pool.num = n; - bvf->evst_pool.cur = 0; + bvf->evst_sr_pool.num = k; + bvf->evst_sr_pool.cur = 0; - bvf->evst = pull_eval_state(bvf); + bvf->evst_tp_pool.ent = bvf->evst_sr_pool.ent + k; + bvf->evst_tp_pool.num = n - k; + bvf->evst_tp_pool.cur = 0; + + bvf->evst = pull_eval_state(&bvf->evst_sr_pool); return 0; } +/* + * try to allocate and initialise new eval state for given node. + * later if no errors will be encountered, this state will be accepted as + * one of the possible 'safe' states for that node. + */ +static void +save_start_eval_state(struct bpf_verifier *bvf, struct inst_node *node) +{ + RTE_ASSERT(node->evst.start == NULL); + + /* limit number of states for one node with some reasonable value */ + if (node->evst.nb_safe >= NODE_EVST_MAX) + return; + + /* try to get new eval_state */ + node->evst.start = pull_eval_state(&bvf->evst_tp_pool); + + /* make a copy of current state */ + if (node->evst.start != NULL) { + memcpy(node->evst.start, bvf->evst, sizeof(*node->evst.start)); + SLIST_NEXT(node->evst.start, next) = NULL; + } +} + +/* + * add @start state to the list of @safe states. + */ +static void +save_safe_eval_state(struct bpf_verifier *bvf, struct inst_node *node) +{ + if (node->evst.start == NULL) + return; + + SLIST_INSERT_HEAD(&node->evst.safe, node->evst.start, next); + node->evst.nb_safe++; + + RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u,state=%p): nb_safe=%u;\n", + __func__, bvf, get_node_idx(bvf, node), node->evst.start, + node->evst.nb_safe); + + node->evst.start = NULL; +} + /* * Save current eval state. 
*/ static int -save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) +save_cur_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { struct bpf_eval_state *st; /* get new eval_state for this node */ - st = pull_eval_state(bvf); + st = pull_eval_state(&bvf->evst_sr_pool); if (st == NULL) { RTE_BPF_LOG(ERR, @@ -2159,9 +2227,11 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) /* swap current state with new one */ - node->evst = bvf->evst; + RTE_ASSERT(node->evst.cur == NULL); + node->evst.cur = bvf->evst; bvf->evst = st; RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", - __func__, bvf, get_node_idx(bvf, node), node->evst, bvf->evst); + __func__, bvf, get_node_idx(bvf, node), node->evst.cur, + bvf->evst); return 0; @@ -2172,12 +2242,13 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) */ static void -restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) +restore_cur_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", - __func__, bvf, get_node_idx(bvf, node), bvf->evst, node->evst); + __func__, bvf, get_node_idx(bvf, node), bvf->evst, + node->evst.cur); - bvf->evst = node->evst; - node->evst = NULL; - push_eval_state(bvf); + bvf->evst = node->evst.cur; + node->evst.cur = NULL; + push_eval_state(&bvf->evst_sr_pool); } @@ -2196,5 +2267,5 @@ log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, rte_log(loglvl, rte_bpf_logtype, "r%u={\n" - "\tv={type=%u, size=%zu},\n" + "\tv={type=%u, size=%zu, buf_size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" "\tu={min=0x%" PRIx64 ", max=0x%" PRIx64 "},\n" @@ -2202,5 +2273,5 @@ log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, "};\n", ins->dst_reg, - rv->v.type, rv->v.size, + rv->v.type, rv->v.size, rv->v.buf_size, rv->mask, rv->u.min, rv->u.max, @@ -2209,11 +2280,109 @@ log_eval_state(const struct bpf_verifier *bvf, const struct 
ebpf_insn *ins, /* - * Do second pass through CFG and try to evaluate instructions - * via each possible path. - * Right now evaluation functionality is quite limited. - * Still need to add extra checks for: - * - use/return uninitialized registers. - * - use uninitialized data from the stack. - * - memory boundaries violation. + * compare two evaluation states. + * returns zero if @lv is more conservative (safer) then @rv. + * returns non-zero value otherwise. + */ +static int +cmp_reg_val_within(const struct bpf_reg_val *lv, const struct bpf_reg_val *rv) +{ + /* expect @v and @mask to be identical */ + if (memcmp(&lv->v, &rv->v, sizeof(lv->v)) != 0 || lv->mask != rv->mask) + return -1; + + /* exact match only for mbuf and stack pointers */ + if (lv->v.type == RTE_BPF_ARG_PTR_MBUF || + lv->v.type == BPF_ARG_PTR_STACK) + return -1; + + if (lv->u.min <= rv->u.min && lv->u.max >= rv->u.max && + lv->s.min <= rv->s.min && lv->s.max >= rv->s.max) + return 0; + + return -1; +} + +/* + * compare two evaluation states. + * returns zero if they are identical. + * returns positive value if @lv is more conservative (safer) then @rv. + * returns negative value otherwise. + */ +static int +cmp_eval_state(const struct bpf_eval_state *lv, const struct bpf_eval_state *rv) +{ + int32_t rc; + uint32_t i, k; + + /* for stack expect identical values */ + rc = memcmp(lv->sv, rv->sv, sizeof(lv->sv)); + if (rc != 0) + return -(2 * EBPF_REG_NUM); + + k = 0; + /* check register values */ + for (i = 0; i != RTE_DIM(lv->rv); i++) { + rc = memcmp(&lv->rv[i], &rv->rv[i], sizeof(lv->rv[i])); + if (rc != 0 && cmp_reg_val_within(&lv->rv[i], &rv->rv[i]) != 0) + return -(i + 1); + k += (rc != 0); + } + + return k; +} + +/* + * check did we already evaluated that path and can it be pruned that time. 
+ */ +static int +prune_eval_state(struct bpf_verifier *bvf, const struct inst_node *node, + struct inst_node *next) +{ + int32_t rc; + struct bpf_eval_state *safe; + + rc = INT32_MIN; + SLIST_FOREACH(safe, &next->evst.safe, next) { + rc = cmp_eval_state(safe, bvf->evst); + if (rc >= 0) + break; + } + + rc = (rc >= 0) ? 0 : -1; + + /* + * current state doesn't match any safe states, + * so no prunning is possible right now, + * track current state for future references. + */ + if (rc != 0) + save_start_eval_state(bvf, next); + + RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u,next=%u) returns %d, " + "next->evst.start=%p, next->evst.nb_safe=%u\n", + __func__, bvf, get_node_idx(bvf, node), + get_node_idx(bvf, next), rc, + next->evst.start, next->evst.nb_safe); + return rc; +} + +/* Do second pass through CFG and try to evaluate instructions + * via each possible path. The verifier will try all paths, tracking types of + * registers used as input to instructions, and updating resulting type via + * register state values. Plus for each register and possible stack value it + * tries to estimate possible max/min value. + * For conditional jumps, a stack is used to save evaluation state, so one + * path is explored while the state for the other path is pushed onto the stack. + * Then later, we backtrack to the first pushed instruction and repeat the cycle + * until the stack is empty and we're done. + * For program with many conditional branches walking through all possible path + * could be very excessive. So to minimize number of evaluations we use + * heuristic similar to what Linux kernel does - state pruning: + * If from given instruction for given program state we explore all possible + * paths and for each of them reach _exit() without any complaints and a valid + * R0 value, then for that instruction, that program state can be marked as + * 'safe'. 
When we later arrive at the same instruction with a state + * equivalent to an earlier instruction's 'safe' state, we can prune the search. + * For now, only states for JCC targets are saved/examined. */ static int @@ -2226,4 +2395,11 @@ evaluate(struct bpf_verifier *bvf) struct inst_node *next, *node; + struct { + uint32_t nb_eval; + uint32_t nb_prune; + uint32_t nb_save; + uint32_t nb_restore; + } stats; + /* initial state of frame pointer */ static const struct bpf_reg_val rvfp = { @@ -2249,4 +2425,6 @@ evaluate(struct bpf_verifier *bvf) rc = 0; + memset(&stats, 0, sizeof(stats)); + while (node != NULL && rc == 0) { @@ -2262,9 +2440,12 @@ evaluate(struct bpf_verifier *bvf) /* for jcc node make a copy of evaluation state */ - if (node->nb_edge > 1) - rc |= save_eval_state(bvf, node); + if (node->nb_edge > 1) { + rc |= save_cur_eval_state(bvf, node); + stats.nb_save++; + } if (ins_chk[op].eval != NULL && rc == 0) { err = ins_chk[op].eval(bvf, ins + idx); + stats.nb_eval++; if (err != NULL) { RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", @@ -2280,19 +2461,35 @@ evaluate(struct bpf_verifier *bvf) /* proceed through CFG */ next = get_next_node(bvf, node); + if (next != NULL) { /* proceed with next child */ if (node->cur_edge == node->nb_edge && - node->evst != NULL) - restore_eval_state(bvf, node); + node->evst.cur != NULL) { + restore_cur_eval_state(bvf, node); + stats.nb_restore++; + } - next->prev_node = get_node_idx(bvf, node); - node = next; + /* + * for jcc targets: check did we already evaluated + * that path and can it's evaluation be skipped that + * time. + */ + if (node->nb_edge > 1 && prune_eval_state(bvf, node, + next) == 0) { + next = NULL; + stats.nb_prune++; + } else { + next->prev_node = get_node_idx(bvf, node); + node = next; + } } else { /* * finished with current node and all it's kids, - * proceed with parent + * mark it's @start state as safe for future references, + * and proceed with parent. 
*/ node->cur_edge = 0; + save_safe_eval_state(bvf, node); node = get_prev_node(bvf, node); @@ -2303,4 +2500,12 @@ evaluate(struct bpf_verifier *bvf) } + RTE_BPF_LOG(DEBUG, "%s(%p) returns %d, stats:\n" + "node evaluations=%u;\n" + "state pruned=%u;\n" + "state saves=%u;\n" + "state restores=%u;\n", + __func__, bvf, rc, + stats.nb_eval, stats.nb_prune, stats.nb_save, stats.nb_restore); + return rc; } -- 2.46.0 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-08-23 17:18:11.601180936 +0100 +++ 0058-bpf-fix-load-hangs-with-six-IPv6-addresses.patch 2024-08-23 17:18:09.722430112 +0100 @@ -1 +1 @@ -From a258eebdfb22f95a8a44d31b0eab639aed0a0c4b Mon Sep 17 00:00:00 2001 +From 7598b5b537e6b1b4f5f2409153359b7d04594baf Mon Sep 17 00:00:00 2001 @@ -8,0 +9,2 @@ +[ upstream commit a258eebdfb22f95a8a44d31b0eab639aed0a0c4b ] + @@ -38 +39,0 @@ -Cc: stable@dpdk.org @@ -49 +50 @@ -index 11344fff4d..4f47d6dc7b 100644 +index 81bede3701..dfbef6ca42 100644 @@ -52 +53 @@ -@@ -30,8 +30,11 @@ struct bpf_reg_val { +@@ -32,8 +32,11 @@ struct bpf_reg_val { @@ -64 +65 @@ -@@ -53,4 +56,7 @@ enum { +@@ -55,4 +58,7 @@ enum { @@ -72 +73 @@ -@@ -60,5 +66,16 @@ struct inst_node { +@@ -62,5 +68,16 @@ struct inst_node { @@ -90 +91 @@ -@@ -74,9 +91,6 @@ struct bpf_verifier { +@@ -76,9 +93,6 @@ struct bpf_verifier { @@ -102 +103 @@ -@@ -1086,5 +1100,5 @@ eval_jcc(struct bpf_verifier *bvf, const struct ebpf_insn *ins) +@@ -1088,5 +1102,5 @@ eval_jcc(struct bpf_verifier *bvf, const struct ebpf_insn *ins) @@ -109 +110 @@ -@@ -1815,6 +1829,6 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) +@@ -1817,6 +1831,6 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) @@ -112,4 +113,4 @@ -- RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, " -- "next pc: %u", -+ RTE_BPF_LOG_LINE(ERR, -+ "%s: program boundary violation at pc: %u, next pc: %u", +- RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: 
%u, " +- "next pc: %u\n", ++ RTE_BPF_LOG(ERR, ++ "%s: program boundary violation at pc: %u, next pc: %u\n", @@ -118 +119 @@ -@@ -2092,20 +2106,21 @@ validate(struct bpf_verifier *bvf) +@@ -2094,20 +2108,21 @@ validate(struct bpf_verifier *bvf) @@ -147 +148 @@ -@@ -2114,6 +2129,7 @@ evst_pool_fini(struct bpf_verifier *bvf) +@@ -2116,6 +2131,7 @@ evst_pool_fini(struct bpf_verifier *bvf) @@ -157 +158 @@ -@@ -2121,29 +2137,81 @@ static int +@@ -2123,29 +2139,81 @@ static int @@ -227 +228 @@ -+ RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u,state=%p): nb_safe=%u;", ++ RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u,state=%p): nb_safe=%u;\n", @@ -247,2 +248,2 @@ - RTE_BPF_LOG_LINE(ERR, -@@ -2157,9 +2225,11 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) + RTE_BPF_LOG(ERR, +@@ -2159,9 +2227,11 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) @@ -256 +257 @@ - RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", + RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", @@ -262 +263 @@ -@@ -2170,12 +2240,13 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) +@@ -2172,12 +2242,13 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) @@ -268 +269 @@ - RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", + RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", @@ -281,2 +282,2 @@ -@@ -2194,5 +2265,5 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, - RTE_LOG(DEBUG, BPF, +@@ -2196,5 +2267,5 @@ log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, + rte_log(loglvl, rte_bpf_logtype, @@ -288 +289 @@ -@@ -2200,5 +2271,5 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, +@@ -2202,5 +2273,5 @@ log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, @@ -295 +296 @@ -@@ -2207,11 +2278,109 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn 
*ins, +@@ -2209,11 +2280,109 @@ log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, @@ -384,2 +385,2 @@ -+ RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u,next=%u) returns %d, " -+ "next->evst.start=%p, next->evst.nb_safe=%u", ++ RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u,next=%u) returns %d, " ++ "next->evst.start=%p, next->evst.nb_safe=%u\n", @@ -412 +413 @@ -@@ -2224,4 +2393,11 @@ evaluate(struct bpf_verifier *bvf) +@@ -2226,4 +2395,11 @@ evaluate(struct bpf_verifier *bvf) @@ -424 +425 @@ -@@ -2247,4 +2423,6 @@ evaluate(struct bpf_verifier *bvf) +@@ -2249,4 +2425,6 @@ evaluate(struct bpf_verifier *bvf) @@ -431 +432 @@ -@@ -2260,9 +2438,12 @@ evaluate(struct bpf_verifier *bvf) +@@ -2262,9 +2440,12 @@ evaluate(struct bpf_verifier *bvf) @@ -445,2 +446,2 @@ - RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", -@@ -2278,19 +2459,35 @@ evaluate(struct bpf_verifier *bvf) + RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", +@@ -2280,19 +2461,35 @@ evaluate(struct bpf_verifier *bvf) @@ -487 +488 @@ -@@ -2301,4 +2498,12 @@ evaluate(struct bpf_verifier *bvf) +@@ -2303,4 +2500,12 @@ evaluate(struct bpf_verifier *bvf) @@ -490 +491 @@ -+ RTE_LOG(DEBUG, BPF, "%s(%p) returns %d, stats:\n" ++ RTE_BPF_LOG(DEBUG, "%s(%p) returns %d, stats:\n"