From: Honnappa Nagarahalli
To: "Van Haaren, Harry", "Nicolau, Radu", dev@dpdk.org
Cc: jerinj@marvell.com, nd, "Ananyev, Konstantin", Honnappa Nagarahalli, nd
Date: Wed, 23 Sep 2020 23:10:58 +0000
Subject: Re: [dpdk-dev] [PATCH v1] event/sw: performance improvements
References: <20200908105211.10066-1-radu.nicolau@intel.com>
List-Id: DPDK patches and discussions
> >
> > Add minimum burst throughout the scheduler pipeline and a flush counter.
> > Replace ring API calls with local single threaded implementation where
> > possible.
> >
> > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>
> Thanks for the patch, a few comments inline.
>
> > ---
> >  drivers/event/sw/sw_evdev.h           | 11 +++-
> >  drivers/event/sw/sw_evdev_scheduler.c | 83 +++++++++++++++++++++++----
> >  2 files changed, 81 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
> > index 7c77b2495..95e51065f 100644
> > --- a/drivers/event/sw/sw_evdev.h
> > +++ b/drivers/event/sw/sw_evdev.h
> > @@ -29,7 +29,13 @@
> >  /* report dequeue burst sizes in buckets */
> >  #define SW_DEQ_STAT_BUCKET_SHIFT 2
> >  /* how many packets pulled from port by sched */
> > -#define SCHED_DEQUEUE_BURST_SIZE 32
> > +#define SCHED_DEQUEUE_BURST_SIZE 64
> > +
> > +#define SCHED_MIN_BURST_SIZE 8
> > +#define SCHED_NO_ENQ_CYCLE_FLUSH 256
> > +/* set SCHED_DEQUEUE_BURST_SIZE to 64 or 128 when setting this to 1 */
> > +#define SCHED_REFILL_ONCE_PER_CALL 1
>
> Is it possible to make the above #define a runtime option?
> E.g. --vdev event_sw,refill_iter=1
>
> That would allow packaged versions of DPDK to be usable in both modes.
>
> > +
> >
> >  #define SW_PORT_HIST_LIST (MAX_SW_PROD_Q_DEPTH) /* size of our history list */
> >  #define NUM_SAMPLES 64 /* how many data points use for average stats */
> > @@ -214,6 +220,9 @@ struct sw_evdev {
> >  	uint32_t xstats_count_mode_port;
> >  	uint32_t xstats_count_mode_queue;
> >
> > +	uint16_t sched_flush_count;
> > +	uint16_t sched_min_burst;
> > +
> >  	/* Contains all ports - load balanced and directed */
> >  	struct sw_port ports[SW_PORTS_MAX] __rte_cache_aligned;
> >
> > diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
> > index cff747da8..ca6d1caff 100644
> > --- a/drivers/event/sw/sw_evdev_scheduler.c
> > +++ b/drivers/event/sw/sw_evdev_scheduler.c
> > @@ -26,6 +26,29 @@
> >  /* use cheap bit mixing, we only need to lose a few bits */
> >  #define SW_HASH_FLOWID(f) (((f) ^ (f >> 10)) & FLOWID_MASK)
> >
> > +
> > +/* single object enq and deq for non MT ring */
> > +static __rte_always_inline void
> > +sw_nonmt_ring_dequeue(struct rte_ring *r, void **obj)
> > +{
> > +	if ((r->prod.tail - r->cons.tail) < 1)
> > +		return;
> > +	void **ring = (void **)&r[1];
> > +	*obj = ring[r->cons.tail & r->mask];
> > +	r->cons.tail++;
> > +}
> > +
> > +static __rte_always_inline int
> > +sw_nonmt_ring_enqueue(struct rte_ring *r, void *obj)
> > +{
> > +	if ((r->capacity + r->cons.tail - r->prod.tail) < 1)
> > +		return 0;
> > +	void **ring = (void **)&r[1];
> > +	ring[r->prod.tail & r->mask] = obj;
> > +	r->prod.tail++;
> > +	return 1;
> > +}

Why not make these APIs part of the rte_ring library? You could further
optimize them by keeping the indices on the same cacheline.
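
For instance, a single-threaded ring that keeps both indices in one
cacheline could look roughly like the sketch below. This is illustrative
only, not an existing rte_ring API -- the st_ring name and layout are
made up for the example:

#include <stdint.h>

struct st_ring {
	uint32_t prod;      /* free-running producer index */
	uint32_t cons;      /* free-running consumer index */
	uint32_t mask;      /* size - 1, size is a power of two */
	uint32_t capacity;
	void *slots[];      /* storage follows the 16-byte header */
};

static inline int
st_ring_enqueue(struct st_ring *r, void *obj)
{
	if (r->prod - r->cons == r->capacity)
		return 0;			/* full */
	r->slots[r->prod++ & r->mask] = obj;
	return 1;
}

static inline int
st_ring_dequeue(struct st_ring *r, void **obj)
{
	if (r->prod == r->cons)
		return 0;			/* empty */
	*obj = r->slots[r->cons++ & r->mask];
	return 1;
}

rte_ring intentionally places prod.tail and cons.tail on separate
cachelines to avoid false sharing between threads, so every
single-object operation on the scheduler core touches two lines where
one would do.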
> > +
> > +
> >  static inline uint32_t
> >  sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
> >  		uint32_t iq_num, unsigned int count)
> > @@ -146,9 +169,9 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
> >  			cq_idx = 0;
> >  		cq = qid->cq_map[cq_idx++];
> >
> > -	} while (rte_event_ring_free_count(
> > -			sw->ports[cq].cq_worker_ring) == 0 ||
> > -			sw->ports[cq].inflights == SW_PORT_HIST_LIST);
> > +	} while (sw->ports[cq].inflights == SW_PORT_HIST_LIST ||
> > +			rte_event_ring_free_count(
> > +				sw->ports[cq].cq_worker_ring) == 0);
> >
> >  	struct sw_port *p = &sw->ports[cq];
> >  	if (sw->cq_ring_space[cq] == 0 ||
> > @@ -164,7 +187,7 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
> >  		p->hist_list[head].qid = qid_id;
> >
> >  		if (keep_order)
> > -			rte_ring_sc_dequeue(qid->reorder_buffer_freelist,
> > +			sw_nonmt_ring_dequeue(qid->reorder_buffer_freelist,
> >  					(void *)&p->hist_list[head].rob_entry);
> >
> >  		sw->ports[cq].cq_buf[sw->ports[cq].cq_buf_count++] = *qe;
> > @@ -229,7 +252,7 @@ sw_schedule_qid_to_cq(struct sw_evdev *sw)
> >  		uint32_t pkts_done = 0;
> >  		uint32_t count = iq_count(&qid->iq[iq_num]);
> >
> > -		if (count > 0) {
> > +		if (count >= sw->sched_min_burst) {
> >  			if (type == SW_SCHED_TYPE_DIRECT)
> >  				pkts_done += sw_schedule_dir_to_cq(sw, qid,
> >  						iq_num, count);
> > @@ -267,7 +290,7 @@ sw_schedule_reorder(struct sw_evdev *sw, int qid_start, int qid_end)
> >
> >  	for (; qid_start < qid_end; qid_start++) {
> >  		struct sw_qid *qid = &sw->qids[qid_start];
> > -		int i, num_entries_in_use;
> > +		unsigned int i, num_entries_in_use;
> >
> >  		if (qid->type != RTE_SCHED_TYPE_ORDERED)
> >  			continue;
> > @@ -275,6 +298,9 @@ sw_schedule_reorder(struct sw_evdev *sw, int qid_start, int qid_end)
> >  		num_entries_in_use = rte_ring_free_count(
> >  					qid->reorder_buffer_freelist);
> >
> > +		if (num_entries_in_use < sw->sched_min_burst)
> > +			num_entries_in_use = 0;
> > +
> >  		for (i = 0; i < num_entries_in_use; i++) {
> >  			struct reorder_buffer_entry *entry;
> >  			int j;
> > @@ -320,7 +346,7 @@ sw_schedule_reorder(struct sw_evdev *sw, int qid_start, int qid_end)
> >  			if (!entry->ready) {
> >  				entry->fragment_index = 0;
> >
> > -				rte_ring_sp_enqueue(
> > +				sw_nonmt_ring_enqueue(
> >  						qid->reorder_buffer_freelist,
> >  						entry);
> >
> > @@ -349,9 +375,11 @@ __pull_port_lb(struct sw_evdev *sw, uint32_t port_id, int allow_reorder)
> >  	uint32_t pkts_iter = 0;
> >  	struct sw_port *port = &sw->ports[port_id];
> >
> > +#if !SCHED_REFILL_ONCE_PER_CALL
> >  	/* If shadow ring has 0 pkts, pull from worker ring */
> >  	if (port->pp_buf_count == 0)
> >  		sw_refill_pp_buf(sw, port);
> > +#endif
>
> As per above comment, this #if would become a runtime check.
> Similar for the below #if comments.
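
+1 for making this a runtime option. The existing sw_evdev devarg
handling could be extended for it; a rough sketch follows, where the
"refill_iter" key, the handler and the parse helper are all
hypothetical:

#include <stdlib.h>
#include <rte_common.h>
#include <rte_kvargs.h>

static int
set_refill_once(const char *key __rte_unused, const char *value,
		void *opaque)
{
	/* accept refill_iter=0 or refill_iter=1 */
	*(int *)opaque = (atoi(value) != 0);
	return 0;
}

static int
parse_refill_iter(const char *params, int *refill_once)
{
	static const char * const valid_args[] = { "refill_iter", NULL };
	struct rte_kvargs *kvlist = rte_kvargs_parse(params, valid_args);

	if (kvlist == NULL)
		return -1;
	rte_kvargs_process(kvlist, "refill_iter", set_refill_once,
			refill_once);
	rte_kvargs_free(kvlist);
	return 0;
}

The parsed flag would then replace the SCHED_REFILL_ONCE_PER_CALL
compile-time checks with a branch on a (likewise hypothetical)
sw->refill_once field, which should be well predicted since the value
never changes after probe.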
>
>
> >  	while (port->pp_buf_count) {
> >  		const struct rte_event *qe = &port->pp_buf[port->pp_buf_start];
> > @@ -467,9 +495,11 @@ sw_schedule_pull_port_dir(struct sw_evdev *sw, uint32_t port_id)
> >  	uint32_t pkts_iter = 0;
> >  	struct sw_port *port = &sw->ports[port_id];
> >
> > +#if !SCHED_REFILL_ONCE_PER_CALL
> >  	/* If shadow ring has 0 pkts, pull from worker ring */
> >  	if (port->pp_buf_count == 0)
> >  		sw_refill_pp_buf(sw, port);
> > +#endif
> >
> >  	while (port->pp_buf_count) {
> >  		const struct rte_event *qe = &port->pp_buf[port->pp_buf_start];
> > @@ -557,12 +587,41 @@ sw_event_schedule(struct rte_eventdev *dev)
> >  	/* push all the internal buffered QEs in port->cq_ring to the
> >  	 * worker cores: aka, do the ring transfers batched.
> >  	 */
> > +	int no_enq = 1;
> >  	for (i = 0; i < sw->port_count; i++) {
> > -		struct rte_event_ring *worker = sw->ports[i].cq_worker_ring;
> > -		rte_event_ring_enqueue_burst(worker, sw->ports[i].cq_buf,
> > -				sw->ports[i].cq_buf_count,
> > -				&sw->cq_ring_space[i]);
> > -		sw->ports[i].cq_buf_count = 0;
> > +		struct sw_port *port = &sw->ports[i];
> > +		struct rte_event_ring *worker = port->cq_worker_ring;
> > +
> > +#if SCHED_REFILL_ONCE_PER_CALL
> > +		/* If shadow ring has 0 pkts, pull from worker ring */
> > +		if (port->pp_buf_count == 0)
> > +			sw_refill_pp_buf(sw, port);
> > +#endif
> > +
> > +		if (port->cq_buf_count >= sw->sched_min_burst) {
> > +			rte_event_ring_enqueue_burst(worker,
> > +					port->cq_buf,
> > +					port->cq_buf_count,
> > +					&sw->cq_ring_space[i]);
> > +			port->cq_buf_count = 0;
> > +			no_enq = 0;
> > +		} else {
> > +			sw->cq_ring_space[i] =
> > +				rte_event_ring_free_count(worker) -
> > +				port->cq_buf_count;
> > +		}
> > +	}
> > +
> > +	if (no_enq) {
> > +		if (unlikely(sw->sched_flush_count > SCHED_NO_ENQ_CYCLE_FLUSH))
> > +			sw->sched_min_burst = 1;
> > +		else
> > +			sw->sched_flush_count++;
> > +	} else {
> > +		if (sw->sched_flush_count)
> > +			sw->sched_flush_count--;
> > +		else
> > +			sw->sched_min_burst = SCHED_MIN_BURST_SIZE;
> >  	}
> >
> >  }
> > --
> > 2.17.1
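
One more note on the last hunk, since the flush behaviour is subtle: the
counter only drops sched_min_burst to 1 after SCHED_NO_ENQ_CYCLE_FLUSH
consecutive scheduler calls with no CQ enqueue, and once traffic resumes
it must count back down to zero before the batching threshold is
restored. A stand-alone model of that hysteresis, with names abbreviated
for the example:

#include <stdint.h>

#define SCHED_MIN_BURST_SIZE 8
#define SCHED_NO_ENQ_CYCLE_FLUSH 256

struct burst_state {
	uint16_t flush_count;	/* consecutive no-enqueue calls */
	uint16_t min_burst;	/* current batching threshold */
};

/* call once per sw_event_schedule() invocation */
static void
update_min_burst(struct burst_state *s, int enqueued_this_call)
{
	if (!enqueued_this_call) {
		if (s->flush_count > SCHED_NO_ENQ_CYCLE_FLUSH)
			s->min_burst = 1;	/* flush mode: drain partial bursts */
		else
			s->flush_count++;
	} else {
		if (s->flush_count)
			s->flush_count--;	/* unwind toward batching */
		else
			s->min_burst = SCHED_MIN_BURST_SIZE;
	}
}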