From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Radu Nicolau, Akhil Goyal
Cc: Pavan Nikhilesh <pbhagavatula@marvell.com>
Subject: [PATCH 6/6] examples/ipsec-secgw: cleanup worker state before exit
Date: Wed, 27 Apr 2022 02:44:12 +0530
Message-ID: <20220426211412.6138-6-pbhagavatula@marvell.com>
In-Reply-To: <20220426211412.6138-1-pbhagavatula@marvell.com>
References: <20220426211412.6138-1-pbhagavatula@marvell.com>

Event ports are configured to implicitly release the scheduler contexts
currently held on the next call to rte_event_dequeue_burst(). A worker
core might still hold a scheduling context when it exits, because that
next call to rte_event_dequeue_burst() is never made. Depending on the
worker exit timing, this can lead to a deadlock, especially when the
number of flows is very small.

Add cleanup that releases any scheduling context still held by the
worker using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/ipsec-secgw/ipsec_worker.c | 40 ++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 8639426c5c..3df5acf384 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -749,7 +749,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 	uint8_t nb_links)
 {
 	struct port_drv_mode_data data[RTE_MAX_ETHPORTS];
-	unsigned int nb_rx = 0;
+	unsigned int nb_rx = 0, nb_tx;
 	struct rte_mbuf *pkt;
 	struct rte_event ev;
 	uint32_t lcore_id;
@@ -847,11 +847,19 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		 * directly enqueued to the adapter and it would be
 		 * internally submitted to the eth device.
 		 */
-		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
-				links[0].event_port_id,
-				&ev, /* events */
-				1, /* nb_events */
-				0 /* flags */);
+		nb_tx = rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev, /* events */
+				1, /* nb_events */
+				0 /* flags */);
+		if (!nb_tx)
+			rte_pktmbuf_free(ev.mbuf);
+	}
+
+	if (ev.u64) {
+		ev.op = RTE_EVENT_OP_RELEASE;
+		rte_event_enqueue_burst(links[0].eventdev_id,
+				links[0].event_port_id, &ev, 1);
 	}
 }
 
@@ -864,7 +872,7 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 	uint8_t nb_links)
 {
 	struct lcore_conf_ev_tx_int_port_wrkr lconf;
-	unsigned int nb_rx = 0;
+	unsigned int nb_rx = 0, nb_tx;
 	struct rte_event ev;
 	uint32_t lcore_id;
 	int32_t socket_id;
@@ -952,11 +960,19 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		 * directly enqueued to the adapter and it would be
 		 * internally submitted to the eth device.
 		 */
-		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
-				links[0].event_port_id,
-				&ev, /* events */
-				1, /* nb_events */
-				0 /* flags */);
+		nb_tx = rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev, /* events */
+				1, /* nb_events */
+				0 /* flags */);
+		if (!nb_tx)
+			rte_pktmbuf_free(ev.mbuf);
+	}
+
+	if (ev.u64) {
+		ev.op = RTE_EVENT_OP_RELEASE;
+		rte_event_enqueue_burst(links[0].eventdev_id,
+				links[0].event_port_id, &ev, 1);
 	}
 }
 
-- 
2.25.1
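
For reference, below is a minimal standalone sketch (not part of the patch
itself) of the exit-cleanup pattern the commit message describes, outside
the ipsec-secgw context. worker_loop(), force_quit, dev_id and port_id are
placeholder names, the per-packet processing is stubbed out, and only the
dequeue / RTE_EVENT_OP_RELEASE interplay mirrors what the patch does.

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

static volatile bool force_quit;	/* set from a signal handler elsewhere */

static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;

	ev.u64 = 0;
	while (!force_quit) {
		/* Each dequeue implicitly releases the scheduling context
		 * handed out by the previous one (implicit release enabled,
		 * i.e. RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL not set).
		 */
		if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0) {
			ev.u64 = 0;	/* nothing held after an empty dequeue */
			continue;
		}

		/* A real worker would process and forward the event here;
		 * stubbed out to just dropping the packet.
		 */
		rte_pktmbuf_free(ev.mbuf);
	}

	/* The loop exited without a further dequeue, so the context taken
	 * by the last successful dequeue may still be held; release it so
	 * other workers are not blocked on that (atomic) flow context.
	 */
	if (ev.u64) {
		ev.op = RTE_EVENT_OP_RELEASE;
		rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
	}
}

As in the patch, a non-zero ev.u64 (the mbuf pointer in this application)
is used as the marker that the last dequeue actually returned an event and
therefore that a scheduling context may still be held.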