From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: [PATCH v2 4/6] examples/l3fwd: clean up worker state before exit
Date: Fri, 13 May 2022 21:37:17 +0530
Message-ID: <20220513160719.10558-4-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220513160719.10558-1-pbhagavatula@marvell.com>
References: <20220426211412.6138-1-pbhagavatula@marvell.com>
 <20220513160719.10558-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Event ports are configured to implicitly release the scheduler contexts
currently held in the next call to rte_event_dequeue_burst().
A worker core might still hold a scheduling context during exit, as the
next call to rte_event_dequeue_burst() is never made.
This might lead to a deadlock, depending on the worker exit timing,
especially when there are very few flows.

Add a cleanup function that releases any scheduling contexts still held
by the worker, using RTE_EVENT_OP_RELEASE.

Signed-off-by: Pavan Nikhilesh
---
 examples/l3fwd/l3fwd_em.c    | 32 ++++++++++++++++++++++----------
 examples/l3fwd/l3fwd_event.c | 34 ++++++++++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h |  5 +++++
 examples/l3fwd/l3fwd_fib.c   | 10 ++++++++--
 examples/l3fwd/l3fwd_lpm.c   | 32 ++++++++++++++++++++++----------
 5 files changed, 91 insertions(+), 22 deletions(-)

diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 24d0910fe0..6f8d94f120 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -653,6 +653,7 @@ em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
         evt_rsrc->evq.nb_queues - 1];
     const uint8_t event_d_id = evt_rsrc->event_d_id;
+    uint8_t deq = 0, enq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
     struct rte_event ev;
@@ -665,7 +666,9 @@ em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
     RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
 
     while (!force_quit) {
-        if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+        deq = rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1,
+                          0);
+        if (!deq)
             continue;
 
         struct rte_mbuf *mbuf = ev.mbuf;
@@ -684,19 +687,22 @@ em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
         if (flags & L3FWD_EVENT_TX_ENQ) {
             ev.queue_id = tx_q_id;
             ev.op = RTE_EVENT_OP_FORWARD;
-            while (rte_event_enqueue_burst(event_d_id, event_p_id,
-                    &ev, 1) && !force_quit)
-                ;
+            do {
+                enq = rte_event_enqueue_burst(
+                    event_d_id, event_p_id, &ev, 1);
+            } while (!enq && !force_quit);
         }
 
         if (flags & L3FWD_EVENT_TX_DIRECT) {
             rte_event_eth_tx_adapter_txq_set(mbuf, 0);
-            while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
-                    event_p_id, &ev, 1, 0) &&
-                    !force_quit)
-                ;
+            do {
+                enq = rte_event_eth_tx_adapter_enqueue(
+                    event_d_id, event_p_id, &ev, 1, 0);
+            } while (!enq && !force_quit);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, &ev, enq, deq, 0);
 }
 
 static __rte_always_inline void
@@ -709,9 +715,9 @@ em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
+    int i, nb_enq = 0, nb_deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
-    int i, nb_enq, nb_deq;
 
     if (event_p_id < 0)
         return;
@@ -769,6 +775,9 @@ em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 0);
 }
 
 static __rte_always_inline void
@@ -832,9 +841,9 @@ em_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
+    int i, nb_enq = 0, nb_deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
-    int i, nb_enq, nb_deq;
 
     if (event_p_id < 0)
         return;
@@ -887,6 +896,9 @@ em_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 1);
 }
 
 int __rte_noinline
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 7a401290f8..a14a21b414 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -287,3 +287,37 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
         fib_event_loop[evt_rsrc->vector_enabled][evt_rsrc->tx_mode_q]
             [evt_rsrc->has_burst];
 }
+
+static void
+l3fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
+{
+    uint16_t i;
+
+    for (i = 0; i < num; i++) {
+        rte_pktmbuf_free_bulk(events[i].vec->mbufs,
+                      events[i].vec->nb_elem);
+        rte_mempool_put(rte_mempool_from_obj(events[i].vec),
+                events[i].vec);
+    }
+}
+
+void
+l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
+               struct rte_event events[], uint16_t nb_enq,
+               uint16_t nb_deq, uint8_t is_vector)
+{
+    int i;
+
+    if (nb_deq) {
+        if (is_vector)
+            l3fwd_event_vector_array_free(events + nb_enq,
+                              nb_deq - nb_enq);
+        else
+            for (i = nb_enq; i < nb_deq; i++)
+                rte_pktmbuf_free(events[i].mbuf);
+
+        for (i = 0; i < nb_deq; i++)
+            events[i].op = RTE_EVENT_OP_RELEASE;
+        rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_deq);
+    }
+}
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index f139632016..b93841a16f 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -103,10 +103,15 @@ event_vector_txq_set(struct rte_event_vector *vec, uint16_t txq)
     }
 }
 
+
+
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 int l3fwd_get_free_event_port(struct l3fwd_event_resources *eventdev_rsrc);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
+void l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
+                struct rte_event events[], uint16_t nb_enq,
+                uint16_t nb_deq, uint8_t is_vector);
 
 #endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index 6e0054b4cb..26d0767ae2 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -252,9 +252,9 @@ fib_event_loop(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
+    int i, nb_enq = 0, nb_deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
-    int nb_enq, nb_deq, i;
     uint32_t ipv4_arr[MAX_PKT_BURST];
     uint8_t ipv6_arr[MAX_PKT_BURST][RTE_FIB6_IPV6_ADDR_SIZE];
 
@@ -370,6 +370,9 @@ fib_event_loop(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 0);
 }
 
 int __rte_noinline
@@ -491,7 +494,7 @@ fib_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
-    int nb_enq, nb_deq, i;
+    int nb_enq = 0, nb_deq = 0, i;
 
     if (event_p_id < 0)
         return;
@@ -538,6 +541,9 @@ fib_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 1);
 }
 
 int __rte_noinline
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index bec22c44cd..501fc5db5e 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -273,6 +273,7 @@ lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
         evt_rsrc->evq.nb_queues - 1];
     const uint8_t event_d_id = evt_rsrc->event_d_id;
+    uint8_t enq = 0, deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
     struct rte_event ev;
@@ -285,7 +286,9 @@ lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
     RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__,
         lcore_id);
     while (!force_quit) {
-        if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+        deq = rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1,
+                          0);
+        if (!deq)
             continue;
 
         if (lpm_process_event_pkt(lconf, ev.mbuf) == BAD_PORT) {
@@ -296,19 +299,22 @@ lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
         if (flags & L3FWD_EVENT_TX_ENQ) {
             ev.queue_id = tx_q_id;
             ev.op = RTE_EVENT_OP_FORWARD;
-            while (rte_event_enqueue_burst(event_d_id, event_p_id,
-                    &ev, 1) && !force_quit)
-                ;
+            do {
+                enq = rte_event_enqueue_burst(
+                    event_d_id, event_p_id, &ev, 1);
+            } while (!enq && !force_quit);
         }
 
         if (flags & L3FWD_EVENT_TX_DIRECT) {
            rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
-            while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
-                    event_p_id, &ev, 1, 0) &&
-                    !force_quit)
-                ;
+            do {
+                enq = rte_event_eth_tx_adapter_enqueue(
+                    event_d_id, event_p_id, &ev, 1, 0);
+            } while (!enq && !force_quit);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, &ev, enq, deq, 0);
 }
 
 static __rte_always_inline void
@@ -321,9 +327,9 @@ lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
+    int i, nb_enq = 0, nb_deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
-    int i, nb_enq, nb_deq;
 
     if (event_p_id < 0)
         return;
@@ -375,6 +381,9 @@ lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 0);
 }
 
 static __rte_always_inline void
@@ -459,9 +468,9 @@ lpm_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
     const uint8_t event_d_id = evt_rsrc->event_d_id;
     const uint16_t deq_len = evt_rsrc->deq_depth;
     struct rte_event events[MAX_PKT_BURST];
+    int i, nb_enq = 0, nb_deq = 0;
     struct lcore_conf *lconf;
     unsigned int lcore_id;
-    int i, nb_enq, nb_deq;
 
     if (event_p_id < 0)
         return;
@@ -510,6 +519,9 @@ lpm_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
                 nb_deq - nb_enq, 0);
         }
     }
+
+    l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
+                   nb_deq, 1);
 }
 
 int __rte_noinline
-- 
2.25.1
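
For readers not familiar with the eventdev behaviour this patch relies on: a
port configured for implicit release only drops the scheduling context it
holds on the *next* rte_event_dequeue_burst() call, so a worker that stops
dequeuing has to release its contexts explicitly. The fragment below is a
minimal, self-contained sketch of that pattern; it is not part of the patch,
and the worker() function, the dev_id/port_id arguments, the burst size and
the force_quit flag are illustrative assumptions only.

#include <stdbool.h>
#include <stdint.h>

#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

extern volatile bool force_quit; /* assumed to be set by a signal handler */

static void
worker(uint8_t dev_id, uint8_t port_id)
{
    struct rte_event ev[32];
    uint16_t nb_deq = 0, nb_enq = 0, i;

    while (!force_quit) {
        nb_deq = rte_event_dequeue_burst(dev_id, port_id, ev,
                         RTE_DIM(ev), 0);
        if (!nb_deq)
            continue;

        /* Process the packets, then hand the events back; a real worker
         * would also set ev[i].queue_id to the next pipeline stage. */
        for (i = 0; i < nb_deq; i++)
            ev[i].op = RTE_EVENT_OP_FORWARD;
        nb_enq = rte_event_enqueue_burst(dev_id, port_id, ev, nb_deq);
    }

    /* The port may still hold scheduling contexts for the last dequeued
     * events: free the mbufs of events that were never enqueued back,
     * then release every dequeued context, mirroring what the patch does
     * in l3fwd_event_worker_cleanup(). */
    for (i = nb_enq; i < nb_deq; i++)
        rte_pktmbuf_free(ev[i].mbuf);
    for (i = 0; i < nb_deq; i++)
        ev[i].op = RTE_EVENT_OP_RELEASE;
    if (nb_deq)
        rte_event_enqueue_burst(dev_id, port_id, ev, nb_deq);
}

The same idea is what l3fwd_event_worker_cleanup() factors out for the
single, burst and vector event loops touched above.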