From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Jayatheerthan, Jay"
To: "Naga Harish K, S V", "jerinj@marvell.com"
CC: "dev@dpdk.org"
Date: Wed, 6 Oct 2021 09:11:42 +0000
References: <20211006040256.1443140-1-s.v.naga.harish.k@intel.com> <20211006075548.1548361-1-s.v.naga.harish.k@intel.com> <20211006075548.1548361-4-s.v.naga.harish.k@intel.com>
In-Reply-To: <20211006075548.1548361-4-s.v.naga.harish.k@intel.com>
Subject: Re: [dpdk-dev] [PATCH v8 4/5] eventdev/rx_adapter: implement per queue event buffer
List-Id: DPDK patches and discussions

The full patchset looks good to me.

Acked-by: Jay Jayatheerthan

> -----Original Message-----
> From: Naga Harish K, S V
> Sent: Wednesday, October 6, 2021 1:26 PM
> To: jerinj@marvell.com; Jayatheerthan, Jay
> Cc: dev@dpdk.org
> Subject: [PATCH v8 4/5] eventdev/rx_adapter: implement per queue event buffer
>
> This patch implements the per queue event buffer with
> required validations.
>
> Signed-off-by: Naga Harish K S V
> ---
>  lib/eventdev/rte_event_eth_rx_adapter.c | 211 +++++++++++++++++-------
>  1 file changed, 153 insertions(+), 58 deletions(-)
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index 5ccea168ea..c5c9c26ded 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -102,10 +102,12 @@ struct rte_event_eth_rx_adapter {
> 	uint8_t rss_key_be[RSS_KEY_SIZE];
> 	/* Event device identifier */
> 	uint8_t eventdev_id;
> -	/* Per ethernet device structure */
> -	struct eth_device_info *eth_devices;
> 	/* Event port identifier */
> 	uint8_t event_port_id;
> +	/* Flag indicating per rxq event buffer */
> +	bool use_queue_event_buf;
> +	/* Per ethernet device structure */
> +	struct eth_device_info *eth_devices;
> 	/* Lock to serialize config updates with service function */
> 	rte_spinlock_t rx_lock;
> 	/* Max mbufs processed in any service function invocation */
> @@ -241,6 +243,7 @@ struct eth_rx_queue_info {
> 	uint32_t flow_id_mask; /* Set to ~0 if app provides flow id else 0 */
> 	uint64_t event;
> 	struct eth_rx_vector_data vector_data;
> +	struct rte_eth_event_enqueue_buffer *event_buf;
> };
>
> static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> @@ -262,6 +265,18 @@ rxa_validate_id(uint8_t id)
> 	return id < RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE;
> }
>
> +static inline struct rte_eth_event_enqueue_buffer *
> +rxa_event_buf_get(struct rte_event_eth_rx_adapter *rx_adapter,
> +		  uint16_t eth_dev_id, uint16_t rx_queue_id)
> +{
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct eth_device_info *dev_info =
> +			&rx_adapter->eth_devices[eth_dev_id];
> +		return dev_info->rx_queue[rx_queue_id].event_buf;
> +	} else
> +		return &rx_adapter->event_enqueue_buffer;
> +}
> +
> #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \
> 	if (!rxa_validate_id(id)) { \
> 		RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \
> @@ -767,10 +782,9 @@ rxa_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter,
>
> /* Enqueue buffered events to event device */
> static inline uint16_t
> -rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> +rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
> +		       struct rte_eth_event_enqueue_buffer *buf)
> {
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> 	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
> 	uint16_t count = buf->last ? buf->last - buf->head : buf->count;
>
> @@ -888,15 +902,14 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> 		uint16_t eth_dev_id,
> 		uint16_t rx_queue_id,
> 		struct rte_mbuf **mbufs,
> -		uint16_t num)
> +		uint16_t num,
> +		struct rte_eth_event_enqueue_buffer *buf)
> {
> 	uint32_t i;
> 	struct eth_device_info *dev_info =
> 		&rx_adapter->eth_devices[eth_dev_id];
> 	struct eth_rx_queue_info *eth_rx_queue_info =
> 		&dev_info->rx_queue[rx_queue_id];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> 	uint16_t new_tail = buf->tail;
> 	uint64_t event = eth_rx_queue_info->event;
> 	uint32_t flow_id_mask = eth_rx_queue_info->flow_id_mask;
> @@ -995,11 +1008,10 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> 	uint16_t queue_id,
> 	uint32_t rx_count,
> 	uint32_t max_rx,
> -	int *rxq_empty)
> +	int *rxq_empty,
> +	struct rte_eth_event_enqueue_buffer *buf)
> {
> 	struct rte_mbuf *mbufs[BATCH_SIZE];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> 	struct rte_event_eth_rx_adapter_stats *stats =
> 		&rx_adapter->stats;
> 	uint16_t n;
> @@ -1012,7 +1024,7 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> 	 */
> 	while (rxa_pkt_buf_available(buf)) {
> 		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
>
> 		stats->rx_poll_count++;
> 		n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE);
> @@ -1021,14 +1033,14 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> 				*rxq_empty = 1;
> 			break;
> 		}
> -		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n);
> +		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf);
> 		nb_rx += n;
> 		if (rx_count + nb_rx > max_rx)
> 			break;
> 	}
>
> 	if (buf->count > 0)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
> 	return nb_rx;
> }
> @@ -1169,7 +1181,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> 	ring_lock = &rx_adapter->intr_ring_lock;
>
> 	if (buf->count >= BATCH_SIZE)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
> 	while (rxa_pkt_buf_available(buf)) {
> 		struct eth_device_info *dev_info;
> @@ -1221,7 +1233,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> 					continue;
> 				n = rxa_eth_rx(rx_adapter, port, i, nb_rx,
> 					rx_adapter->max_nb_rx,
> -					&rxq_empty);
> +					&rxq_empty, buf);
> 				nb_rx += n;
>
> 				enq_buffer_full = !rxq_empty && n == 0;
> @@ -1242,7 +1254,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> 	} else {
> 		n = rxa_eth_rx(rx_adapter, port, queue, nb_rx,
> 			rx_adapter->max_nb_rx,
> -			&rxq_empty);
> +			&rxq_empty, buf);
> 		rx_adapter->qd_valid = !rxq_empty;
> 		nb_rx += n;
> 		if (nb_rx > rx_adapter->max_nb_rx)
> @@ -1273,13 +1285,12 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> {
> 	uint32_t num_queue;
> 	uint32_t nb_rx = 0;
> -	struct rte_eth_event_enqueue_buffer *buf;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
> 	uint32_t wrr_pos;
> 	uint32_t max_nb_rx;
>
> 	wrr_pos = rx_adapter->wrr_pos;
> 	max_nb_rx = rx_adapter->max_nb_rx;
> -	buf = &rx_adapter->event_enqueue_buffer;
>
> 	/* Iterate through a WRR sequence */
> 	for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) {
> @@ -1287,24 +1298,31 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> 		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
> 		uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
>
> +		buf = rxa_event_buf_get(rx_adapter, d, qid);
> +
> 		/* Don't do a batch dequeue from the rx queue if there isn't
> 		 * enough space in the enqueue buffer.
> 		 */
> 		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
> 		if (!rxa_pkt_buf_available(buf)) {
> -			rx_adapter->wrr_pos = wrr_pos;
> -			return nb_rx;
> +			if (rx_adapter->use_queue_event_buf)
> +				goto poll_next_entry;
> +			else {
> +				rx_adapter->wrr_pos = wrr_pos;
> +				return nb_rx;
> +			}
> 		}
>
> 		nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx,
> -				NULL);
> +				NULL, buf);
> 		if (nb_rx > max_nb_rx) {
> 			rx_adapter->wrr_pos =
> 				(wrr_pos + 1) % rx_adapter->wrr_len;
> 			break;
> 		}
>
> +poll_next_entry:
> 		if (++wrr_pos == rx_adapter->wrr_len)
> 			wrr_pos = 0;
> 	}
> @@ -1315,12 +1333,13 @@ static void
> rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
> {
> 	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
> 	struct rte_event *ev;
>
> +	buf = rxa_event_buf_get(rx_adapter, vec->port, vec->queue);
> +
> 	if (buf->count)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
> 	if (vec->vector_ev->nb_elem == 0)
> 		return;
> @@ -1947,9 +1966,16 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
> 	rx_adapter->num_rx_intr -= intrq;
> 	dev_info->nb_rx_intr -= intrq;
> 	dev_info->nb_shared_intr -= intrq && sintrq;
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct rte_eth_event_enqueue_buffer *event_buf =
> +			dev_info->rx_queue[rx_queue_id].event_buf;
> +		rte_free(event_buf->events);
> +		rte_free(event_buf);
> +		dev_info->rx_queue[rx_queue_id].event_buf = NULL;
> +	}
> }
>
> -static void
> +static int
> rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> 	struct eth_device_info *dev_info,
> 	int32_t rx_queue_id,
> @@ -1961,15 +1987,21 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> 	int intrq;
> 	int sintrq;
> 	struct rte_event *qi_ev;
> +	struct rte_eth_event_enqueue_buffer *new_rx_buf = NULL;
> +	uint16_t eth_dev_id = dev_info->dev->data->port_id;
> +	int ret;
>
> 	if (rx_queue_id == -1) {
> 		uint16_t nb_rx_queues;
> 		uint16_t i;
>
> 		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> -		for (i = 0; i < nb_rx_queues; i++)
> -			rxa_add_queue(rx_adapter, dev_info, i, conf);
> -		return;
> +		for (i = 0; i < nb_rx_queues; i++) {
> +			ret = rxa_add_queue(rx_adapter, dev_info, i, conf);
> +			if (ret)
> +				return ret;
> +		}
> +		return 0;
> 	}
>
> 	pollq = rxa_polled_queue(dev_info, rx_queue_id);
> @@ -2032,6 +2064,37 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> 			dev_info->next_q_idx = 0;
> 		}
> 	}
> +
> +	if (!rx_adapter->use_queue_event_buf)
> +		return 0;
> +
> +	new_rx_buf = rte_zmalloc_socket("rx_buffer_meta",
> +				sizeof(*new_rx_buf), 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf == NULL) {
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer meta for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	new_rx_buf->events_size = RTE_ALIGN(conf->event_buf_size, BATCH_SIZE);
> +	new_rx_buf->events_size += (2 * BATCH_SIZE);
> +	new_rx_buf->events = rte_zmalloc_socket("rx_buffer",
> +				sizeof(struct rte_event) *
> +				new_rx_buf->events_size, 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf->events == NULL) {
> +		rte_free(new_rx_buf);
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	queue_info->event_buf = new_rx_buf;
> +
> +	return 0;
> }
>
> static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> @@ -2060,6 +2123,16 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> 			temp_conf.servicing_weight = 1;
> 		}
> 		queue_conf = &temp_conf;
> +
> +		if (queue_conf->servicing_weight == 0 &&
> +		    rx_adapter->use_queue_event_buf) {
> +
> +			RTE_EDEV_LOG_ERR("Use of queue level event buffer "
> +					 "not supported for interrupt queues "
> +					 "dev_id: %d queue_id: %d",
> +					 eth_dev_id, rx_queue_id);
> +			return -EINVAL;
> +		}
> 	}
>
> 	nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> @@ -2139,7 +2212,9 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>
>
>
> -	rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	ret = rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	if (ret)
> +		goto err_free_rxqueue;
> 	rxa_calc_wrr_sequence(rx_adapter, rx_poll, rx_wrr);
>
> 	rte_free(rx_adapter->eth_rx_poll);
> @@ -2160,7 +2235,7 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> 	rte_free(rx_poll);
> 	rte_free(rx_wrr);
>
> -	return 0;
> +	return ret;
> }
>
> static int
> @@ -2286,20 +2361,25 @@ rxa_create(uint8_t id, uint8_t dev_id,
> 		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
>
> 	/* Rx adapter event buffer allocation */
> -	buf = &rx_adapter->event_enqueue_buffer;
> -	buf->events_size = rxa_params->event_buf_size;
> -
> -	events = rte_zmalloc_socket(rx_adapter->mem_name,
> -			buf->events_size * sizeof(*events),
> -			0, socket_id);
> -	if (events == NULL) {
> -		RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n");
> -		rte_free(rx_adapter->eth_devices);
> -		rte_free(rx_adapter);
> -		return -ENOMEM;
> -	}
> +	rx_adapter->use_queue_event_buf = rxa_params->use_queue_event_buf;
> +
> +	if (!rx_adapter->use_queue_event_buf) {
> +		buf = &rx_adapter->event_enqueue_buffer;
> +		buf->events_size = rxa_params->event_buf_size;
> +
> +		events = rte_zmalloc_socket(rx_adapter->mem_name,
> +				buf->events_size * sizeof(*events),
> +				0, socket_id);
> +		if (events == NULL) {
> +			RTE_EDEV_LOG_ERR("Failed to allocate memory "
> +					 "for adapter event buffer");
> +			rte_free(rx_adapter->eth_devices);
> +			rte_free(rx_adapter);
> +			return -ENOMEM;
> +		}
>
> -	rx_adapter->event_enqueue_buffer.events = events;
> +		rx_adapter->event_enqueue_buffer.events = events;
> +	}
>
> 	event_eth_rx_adapter[id] = rx_adapter;
>
> @@ -2327,6 +2407,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
>
> 	/* use default values for adapter params */
> 	rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
> +	rxa_params.use_queue_event_buf = false;
>
> 	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
> }
> @@ -2343,14 +2424,27 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
> 	if (port_config == NULL)
> 		return -EINVAL;
>
> -	/* use default values if rxa_params is NULL */
> 	if (rxa_params == NULL) {
> +		/* use default values if rxa_params is NULL */
> 		rxa_params = &temp_params;
> 		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> -	}
> -
> -	if (rxa_params->event_buf_size == 0)
> +		rxa_params->use_queue_event_buf = false;
> +	} else if ((!rxa_params->use_queue_event_buf &&
> +		    rxa_params->event_buf_size == 0) ||
> +		   (rxa_params->use_queue_event_buf &&
> +		    rxa_params->event_buf_size != 0)) {
> +		RTE_EDEV_LOG_ERR("Invalid adapter params\n");
> 		return -EINVAL;
> +	} else if (!rxa_params->use_queue_event_buf) {
> +		/* adjust event buff size with BATCH_SIZE used for fetching
> +		 * packets from NIC rx queues to get full buffer utilization
> +		 * and prevent unnecessary rollovers.
> +		 */
> +
> +		rxa_params->event_buf_size =
> +			RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
> +		rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> +	}
>
> 	pc = rte_malloc(NULL, sizeof(*pc), 0);
> 	if (pc == NULL)
> @@ -2358,14 +2452,6 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
>
> 	*pc = *port_config;
>
> -	/* adjust event buff size with BATCH_SIZE used for fetching packets
> -	 * from NIC rx queues to get full buffer utilization and prevent
> -	 * unnecessary rollovers.
> -	 */
> -	rxa_params->event_buf_size = RTE_ALIGN(rxa_params->event_buf_size,
> -					       BATCH_SIZE);
> -	rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> -
> 	ret = rxa_create(id, dev_id, rxa_params, rxa_default_conf_cb, pc);
> 	if (ret)
> 		rte_free(pc);
> @@ -2418,7 +2504,8 @@ rte_event_eth_rx_adapter_free(uint8_t id)
> 	if (rx_adapter->default_cb_arg)
> 		rte_free(rx_adapter->conf_arg);
> 	rte_free(rx_adapter->eth_devices);
> -	rte_free(rx_adapter->event_enqueue_buffer.events);
> +	if (!rx_adapter->use_queue_event_buf)
> +		rte_free(rx_adapter->event_enqueue_buffer.events);
> 	rte_free(rx_adapter);
> 	event_eth_rx_adapter[id] = NULL;
>
> @@ -2522,6 +2609,14 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> 		return -EINVAL;
> 	}
>
> +	if ((rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size == 0) ||
> +	    (!rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size != 0)) {
> +		RTE_EDEV_LOG_ERR("Invalid Event buffer size for the queue");
> +		return -EINVAL;
> +	}
> +
> 	dev_info = &rx_adapter->eth_devices[eth_dev_id];
>
> 	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
> --
> 2.25.1
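
For context, a minimal usage sketch of the per-queue buffer mode enabled by this series. This is illustrative only, not part of the patch: it assumes the adapter and queue configuration fields introduced earlier in the series (use_queue_event_buf and event_buf_size in struct rte_event_eth_rx_adapter_params, and the per-queue event_buf_size in struct rte_event_eth_rx_adapter_queue_conf); event device and ethdev setup, plus most error handling, are omitted.

#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>

/* Illustrative sketch: create an Rx adapter that uses one event buffer
 * per Rx queue instead of a single adapter-level buffer.
 */
static int
setup_rx_adapter_per_queue_buf(uint8_t adapter_id, uint8_t evdev_id,
			       uint16_t eth_port)
{
	struct rte_event_port_conf port_conf = {
		.new_event_threshold = 4096,
		.dequeue_depth = 128,
		.enqueue_depth = 128,
	};
	struct rte_event_eth_rx_adapter_params rxa_params = {
		.event_buf_size = 0,	/* must stay 0 in per-queue mode */
		.use_queue_event_buf = true,
	};
	struct rte_event_eth_rx_adapter_queue_conf queue_conf = {
		.ev.queue_id = 0,
		.ev.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.servicing_weight = 1,	/* polled queue; interrupt queues are rejected */
		.event_buf_size = 1024,	/* per-queue buffer, must be non-zero */
	};
	int ret;

	ret = rte_event_eth_rx_adapter_create_with_params(adapter_id, evdev_id,
							   &rxa_params, &port_conf);
	if (ret)
		return ret;

	/* Each queue added now gets its own enqueue buffer, allocated on the
	 * ethdev's NUMA socket and rounded up by BATCH_SIZE as in rxa_add_queue().
	 */
	return rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port, 0,
						  &queue_conf);
}

With use_queue_event_buf set, the adapter-level event_buf_size has to remain 0 and every queue added must carry a non-zero event_buf_size, matching the validations added in this patch.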