From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh
To: Radu Nicolau, Akhil Goyal
CC: Pavan Nikhilesh
Subject: [PATCH v2 6/6] examples/ipsec-secgw: cleanup worker state before exit
Date: Fri, 13 May 2022 21:37:19 +0530
Message-ID: <20220513160719.10558-6-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220513160719.10558-1-pbhagavatula@marvell.com>
References: <20220426211412.6138-1-pbhagavatula@marvell.com>
 <20220513160719.10558-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev@dpdk.org>

From: Pavan Nikhilesh

Event ports are configured to implicitly release the scheduler contexts
currently held when the next call to rte_event_dequeue_burst() is made.
A worker core might still hold a scheduling context at exit, since that
next call to rte_event_dequeue_burst() never happens. Depending on the
worker's exit timing, this can lead to a deadlock, especially when there
are very few flows.

Add a cleanup step that releases any scheduling context still held by
the worker using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh
---
 examples/ipsec-secgw/ipsec_worker.c | 40 ++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 8639426c5c..3df5acf384 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -749,7 +749,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 	uint8_t nb_links)
 {
 	struct port_drv_mode_data data[RTE_MAX_ETHPORTS];
-	unsigned int nb_rx = 0;
+	unsigned int nb_rx = 0, nb_tx;
 	struct rte_mbuf *pkt;
 	struct rte_event ev;
 	uint32_t lcore_id;
@@ -847,11 +847,19 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		 * directly enqueued to the adapter and it would be
 		 * internally submitted to the eth device.
 		 */
-		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
-				links[0].event_port_id,
-				&ev,	/* events */
-				1,	/* nb_events */
-				0	/* flags */);
+		nb_tx = rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* flags */);
+		if (!nb_tx)
+			rte_pktmbuf_free(ev.mbuf);
+	}
+
+	if (ev.u64) {
+		ev.op = RTE_EVENT_OP_RELEASE;
+		rte_event_enqueue_burst(links[0].eventdev_id,
+				links[0].event_port_id, &ev, 1);
 	}
 }
 
@@ -864,7 +872,7 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 	uint8_t nb_links)
 {
 	struct lcore_conf_ev_tx_int_port_wrkr lconf;
-	unsigned int nb_rx = 0;
+	unsigned int nb_rx = 0, nb_tx;
 	struct rte_event ev;
 	uint32_t lcore_id;
 	int32_t socket_id;
@@ -952,11 +960,19 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		 * directly enqueued to the adapter and it would be
 		 * internally submitted to the eth device.
 		 */
-		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
-				links[0].event_port_id,
-				&ev,	/* events */
-				1,	/* nb_events */
-				0	/* flags */);
+		nb_tx = rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* flags */);
+		if (!nb_tx)
+			rte_pktmbuf_free(ev.mbuf);
+	}
+
+	if (ev.u64) {
+		ev.op = RTE_EVENT_OP_RELEASE;
+		rte_event_enqueue_burst(links[0].eventdev_id,
+				links[0].event_port_id, &ev, 1);
 	}
 }
-- 
2.25.1