From: Pavan Nikhilesh Bhagavatula
To: Feifei Wang, Jerin Jacob Kollanukkaran, Harry van Haaren
Cc: dev@dpdk.org, nd, Honnappa Nagarahalli, stable@dpdk.org, Ruifeng Wang
Date: Tue, 5 Jan 2021 09:29:25 +0000
Subject: Re: [dpdk-stable] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test

Hi Feifei,

>Hi, Pavan
>
>Sorry for my late reply and thanks very much for your review.
>
>> -----Original Message-----
>> From: Pavan Nikhilesh Bhagavatula
>> Sent: 22 December 2020 18:33
>> To: Feifei Wang; jerinj@marvell.com; Harry van Haaren; Pavan Nikhilesh
>> Cc: dev@dpdk.org; nd; Honnappa Nagarahalli; stable@dpdk.org; Phil Yang
>> Subject: RE: [RFC PATCH v1 4/6] app/eventdev: add release barriers for
>> pipeline test
>>
>>
>> >Add release barriers before updating the processed packets for worker
>> >lcores to ensure the worker lcore has really finished data processing
>> >and then it can update the processed packets number.
>> >
>>
>> I believe we can live with minor inaccuracies in the stats being presented,
>> as atomics are pretty heavy when the scheduler is limited to a burst size
>> of 1.
>>
>> One option is to move it before a pipeline operation (pipeline_event_tx,
>> pipeline_fwd_event etc.) as they imply an implicit release barrier (as all
>> the changes done to the event should be visible to the next core).
>
>If I understand correctly, your meaning is that we move the release barriers
>before pipeline_event_tx or pipeline_fwd_event. This can ensure the event
>has been processed before the next core begins to tx/fwd. For example:

What I meant was that event APIs such as `rte_event_enqueue_burst` and
`rte_event_eth_tx_adapter_enqueue` act as an implicit release barrier, and
the API `rte_event_dequeue_burst` acts as an implicit acquire barrier.
Since the pipeline_* tests start with a dequeue() and end with an
enqueue(), I don't believe we need barriers in between.
>
>if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
>+	__atomic_thread_fence(__ATOMIC_RELEASE);
>	pipeline_event_tx(dev, port, &ev);
>	w->processed_pkts++;
>} else {
>	ev.queue_id++;
>+	__atomic_thread_fence(__ATOMIC_RELEASE);
>	pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>	pipeline_event_enqueue(dev, port, &ev);
>
>However, there are two reasons to prevent this:
>
>First, compared with other tests in app/eventdev, for example the eventdev
>perf test, the wmb is placed after the event operation to ensure the
>operation has finished before w->processed_pkts++ becomes visible.

The perf_* tests start with a dequeue() and finally end with a
mempool_put(), which should also act as an implicit acquire/release pair,
making the stats consistent?

>So, if we move the release barriers before tx/fwd, it may cause the tests
>in app/eventdev to become inconsistent. This may reduce the maintainability
>of the code and make it difficult to understand.
>
>Second, it is a test case; though the heavy barrier may cause performance
>degradation, it can ensure that the operation process and the test result
>are correct. And maybe for a test case, correctness is more important than
>performance.
>

Most of our internal perf tests run on 24/48 core combinations, and since
the Octeontx2 event device driver supports a burst size of 1, it will show
up as a huge performance degradation.

>So, due to the two reasons above, I'm ambivalent about what we should do
>in the next step.
>
>Best Regards
>Feifei

Regards,
Pavan.
>
>> >Fixes: 314bcf58ca8f ("app/eventdev: add pipeline queue worker
>> >functions")
>> >Cc: pbhagavatula@marvell.com
>> >Cc: stable@dpdk.org
>> >
>> >Signed-off-by: Phil Yang
>> >Signed-off-by: Feifei Wang
>> >Reviewed-by: Ruifeng Wang
>> >---
>> > app/test-eventdev/test_pipeline_queue.c | 64 +++++++++++++++++++++----
>> > 1 file changed, 56 insertions(+), 8 deletions(-)
>> >
>> >diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
>> >index 7bebac34f..0c0ec0ceb 100644
>> >--- a/app/test-eventdev/test_pipeline_queue.c
>> >+++ b/app/test-eventdev/test_pipeline_queue.c
>> >@@ -30,7 +30,13 @@ pipeline_queue_worker_single_stage_tx(void *arg)
>> >
>> > 	if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
>> > 		pipeline_event_tx(dev, port, &ev);
>> >-		w->processed_pkts++;
>> >+
>> >+		/* release barrier here ensures stored operation
>> >+		 * of the event completes before the number of
>> >+		 * processed pkts is visible to the main core
>> >+		 */
>> >+		__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+				__ATOMIC_RELEASE);
>> > 	} else {
>> > 		ev.queue_id++;
>> > 		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>> >@@ -59,7 +65,13 @@ pipeline_queue_worker_single_stage_fwd(void *arg)
>> > 		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
>> > 		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>> > 		pipeline_event_enqueue(dev, port, &ev);
>> >-		w->processed_pkts++;
>> >+
>> >+		/* release barrier here ensures stored operation
>> >+		 * of the event completes before the number of
>> >+		 * processed pkts is visible to the main core
>> >+		 */
>> >+		__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+				__ATOMIC_RELEASE);
>> > 	}
>> >
>> > 	return 0;
>> >@@ -84,7 +96,13 @@ pipeline_queue_worker_single_stage_burst_tx(void *arg)
>> > 		if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
>> > 			pipeline_event_tx(dev, port, &ev[i]);
>> > 			ev[i].op = RTE_EVENT_OP_RELEASE;
>> >-			w->processed_pkts++;
>> >+
>> >+			/* release barrier here ensures stored operation
>> >+			 * of the event completes before the number of
>> >+			 * processed pkts is visible to the main core
>> >+			 */
>> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+					__ATOMIC_RELEASE);
>> > 		} else {
>> > 			ev[i].queue_id++;
>> > 			pipeline_fwd_event(&ev[i],
>> >@@ -121,7 +139,13 @@ pipeline_queue_worker_single_stage_burst_fwd(void *arg)
>> > 		}
>> >
>> > 		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
>> >-		w->processed_pkts += nb_rx;
>> >+
>> >+		/* release barrier here ensures stored operation
>> >+		 * of the event completes before the number of
>> >+		 * processed pkts is visible to the main core
>> >+		 */
>> >+		__atomic_fetch_add(&(w->processed_pkts), nb_rx,
>> >+				__ATOMIC_RELEASE);
>> > 	}
>> >
>> > 	return 0;
>> >@@ -146,7 +170,13 @@ pipeline_queue_worker_multi_stage_tx(void *arg)
>> >
>> > 	if (ev.queue_id == tx_queue[ev.mbuf->port]) {
>> > 		pipeline_event_tx(dev, port, &ev);
>> >-		w->processed_pkts++;
>> >+
>> >+		/* release barrier here ensures stored operation
>> >+		 * of the event completes before the number of
>> >+		 * processed pkts is visible to the main core
>> >+		 */
>> >+		__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+				__ATOMIC_RELEASE);
>> > 		continue;
>> > 	}
>> >
>> >@@ -180,7 +210,13 @@ pipeline_queue_worker_multi_stage_fwd(void *arg)
>> > 		ev.queue_id = tx_queue[ev.mbuf->port];
>> > 		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
>> > 		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>> >-		w->processed_pkts++;
>> >+
>> >+		/* release barrier here ensures stored operation
>> >+		 * of the event completes before the number of
>> >+		 * processed pkts is visible to the main core
>> >+		 */
>> >+		__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+				__ATOMIC_RELEASE);
>> > 	} else {
>> > 		ev.queue_id++;
>> > 		pipeline_fwd_event(&ev, sched_type_list[cq_id]);
>> >@@ -214,7 +250,13 @@ pipeline_queue_worker_multi_stage_burst_tx(void *arg)
>> > 		if (ev[i].queue_id == tx_queue[ev[i].mbuf->port]) {
>> > 			pipeline_event_tx(dev, port, &ev[i]);
>> > 			ev[i].op = RTE_EVENT_OP_RELEASE;
>> >-			w->processed_pkts++;
>> >+
>> >+			/* release barrier here ensures stored operation
>> >+			 * of the event completes before the number of
>> >+			 * processed pkts is visible to the main core
>> >+			 */
>> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+					__ATOMIC_RELEASE);
>> > 			continue;
>> > 		}
>> >
>> >@@ -254,7 +296,13 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
>> >
>> > 			rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, 0);
>> > 			pipeline_fwd_event(&ev[i],
>> > 					RTE_SCHED_TYPE_ATOMIC);
>> >-			w->processed_pkts++;
>> >+
>> >+			/* release barrier here ensures stored operation
>> >+			 * of the event completes before the number of
>> >+			 * processed pkts is visible to the main core
>> >+			 */
>> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
>> >+					__ATOMIC_RELEASE);
>> > 		} else {
>> > 			ev[i].queue_id++;
>> > 			pipeline_fwd_event(&ev[i],
>> >--
>> >2.17.1