From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: <dev@dpdk.org>, Sunil Kumar Kori, Pavan Nikhilesh
Subject: [PATCH v2 5/6] examples/l2fwd-event: clean up worker state before exit
Date: Fri, 13 May 2022 21:37:18 +0530
Message-ID: <20220513160719.10558-5-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220513160719.10558-1-pbhagavatula@marvell.com>
References: <20220426211412.6138-1-pbhagavatula@marvell.com>
 <20220513160719.10558-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

Event ports are configured to implicitly release the scheduler contexts
currently held when the next call to rte_event_dequeue_burst() is made.
A worker core might still hold a scheduling context at exit, because that
next call to rte_event_dequeue_burst() is never made. Depending on the
worker exit timing, this can lead to a deadlock, especially when there
are very few flows.

Add a cleanup function that releases any scheduling contexts still held
by the worker, using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l2fwd-event/l2fwd_common.c | 34 +++++++++++++++++++++++++++++
 examples/l2fwd-event/l2fwd_common.h |  3 +++
 examples/l2fwd-event/l2fwd_event.c  | 31 ++++++++++++++++----------
 3 files changed, 56 insertions(+), 12 deletions(-)

diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index cf3d1b8aaf..15bfe790a0 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -114,3 +114,37 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 
 	return nb_ports_available;
 }
+
+static void
+l2fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
+{
+	uint16_t i;
+
+	for (i = 0; i < num; i++) {
+		rte_pktmbuf_free_bulk(events[i].vec->mbufs,
+				      events[i].vec->nb_elem);
+		rte_mempool_put(rte_mempool_from_obj(events[i].vec),
+				events[i].vec);
+	}
+}
+
+void
+l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
+			   struct rte_event events[], uint16_t nb_enq,
+			   uint16_t nb_deq, uint8_t is_vector)
+{
+	int i;
+
+	if (nb_deq) {
+		if (is_vector)
+			l2fwd_event_vector_array_free(events + nb_enq,
+						      nb_deq - nb_enq);
+		else
+			for (i = nb_enq; i < nb_deq; i++)
+				rte_pktmbuf_free(events[i].mbuf);
+
+		for (i = 0; i < nb_deq; i++)
+			events[i].op = RTE_EVENT_OP_RELEASE;
+		rte_event_enqueue_burst(event_d_id, port_id, events, nb_deq);
+	}
+}
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 396e238c6a..bff3b65abf 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -140,5 +140,8 @@ l2fwd_get_rsrc(void)
 }
 
 int l2fwd_event_init_ports(struct l2fwd_resources *rsrc);
+void l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
+				struct rte_event events[], uint16_t nb_enq,
+				uint16_t nb_deq, uint8_t is_vector);
 
 #endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 6df3cdfeab..63450537fe 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -193,6 +193,7 @@ l2fwd_event_loop_single(struct l2fwd_resources *rsrc,
 						evt_rsrc->evq.nb_queues - 1];
 	const uint64_t timer_period = rsrc->timer_period;
 	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint8_t enq = 0, deq = 0;
 	struct rte_event ev;
 
 	if (port_id < 0)
@@ -203,26 +204,28 @@ l2fwd_event_loop_single(struct l2fwd_resources *rsrc,
 
 	while (!rsrc->force_quit) {
 		/* Read packet from eventdev */
-		if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+		deq = rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0);
+		if (!deq)
 			continue;
 
 		l2fwd_event_fwd(rsrc, &ev, tx_q_id, timer_period, flags);
 
 		if (flags & L2FWD_EVENT_TX_ENQ) {
-			while (rte_event_enqueue_burst(event_d_id, port_id,
-						       &ev, 1) &&
-					!rsrc->force_quit)
-				;
+			do {
+				enq = rte_event_enqueue_burst(event_d_id,
+							      port_id, &ev, 1);
+			} while (!enq && !rsrc->force_quit);
 		}
 
 		if (flags & L2FWD_EVENT_TX_DIRECT) {
-			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
-								 port_id,
-								 &ev, 1, 0) &&
-					!rsrc->force_quit)
-				;
+			do {
+				enq = rte_event_eth_tx_adapter_enqueue(
+					event_d_id, port_id, &ev, 1, 0);
+			} while (!enq && !rsrc->force_quit);
 		}
 	}
+
+	l2fwd_event_worker_cleanup(event_d_id, port_id, &ev, enq, deq, 0);
 }
 
 static __rte_always_inline void
@@ -237,7 +240,7 @@ l2fwd_event_loop_burst(struct l2fwd_resources *rsrc,
 	const uint8_t event_d_id = evt_rsrc->event_d_id;
 	const uint8_t deq_len = evt_rsrc->deq_depth;
 	struct rte_event ev[MAX_PKT_BURST];
-	uint16_t nb_rx, nb_tx;
+	uint16_t nb_rx = 0, nb_tx = 0;
 	uint8_t i;
 
 	if (port_id < 0)
@@ -280,6 +283,8 @@ l2fwd_event_loop_burst(struct l2fwd_resources *rsrc,
 							 ev + nb_tx,
 							 nb_rx - nb_tx, 0);
 		}
 	}
+
+	l2fwd_event_worker_cleanup(event_d_id, port_id, ev, nb_rx, nb_tx, 0);
 }
 
 static __rte_always_inline void
@@ -419,7 +424,7 @@ l2fwd_event_loop_vector(struct l2fwd_resources *rsrc, const uint32_t flags)
 	const uint8_t event_d_id = evt_rsrc->event_d_id;
 	const uint8_t deq_len = evt_rsrc->deq_depth;
 	struct rte_event ev[MAX_PKT_BURST];
-	uint16_t nb_rx, nb_tx;
+	uint16_t nb_rx = 0, nb_tx = 0;
 	uint8_t i;
 
 	if (port_id < 0)
@@ -462,6 +467,8 @@ l2fwd_event_loop_vector(struct l2fwd_resources *rsrc, const uint32_t flags)
 							  nb_rx - nb_tx, 0);
 		}
 	}
+
+	l2fwd_event_worker_cleanup(event_d_id, port_id, ev, nb_rx, nb_tx, 1);
 }
 
 static void __rte_noinline
-- 
2.25.1