From: "Jayatheerthan, Jay"
To: "Naga Harish K, S V", "jerinj@marvell.com"
CC: "dev@dpdk.org"
Date: Wed, 6 Oct 2021 06:42:52 +0000
Subject: Re: [dpdk-dev] [PATCH v7 4/5] eventdev/rx_adapter: implement per queue event buffer
In-Reply-To: <20211006040256.1443140-4-s.v.naga.harish.k@intel.com>
References: <20211005143846.1058491-1-s.v.naga.harish.k@intel.com> <20211006040256.1443140-1-s.v.naga.harish.k@intel.com> <20211006040256.1443140-4-s.v.naga.harish.k@intel.com>
> -----Original Message-----
> From: Naga Harish K, S V
> Sent: Wednesday, October 6, 2021 9:33 AM
> To: jerinj@marvell.com; Jayatheerthan, Jay
> Cc: dev@dpdk.org
> Subject: [PATCH v7 4/5] eventdev/rx_adapter: implement per queue event buffer
>
> this patch implement the per queue event buffer with
> required validations.
>
> Signed-off-by: Naga Harish K S V
> ---
>  lib/eventdev/rte_event_eth_rx_adapter.c | 206 ++++++++++++++++++------
>  1 file changed, 153 insertions(+), 53 deletions(-)
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index 5ccea168ea..1a2aa23475 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -102,10 +102,12 @@ struct rte_event_eth_rx_adapter {
>  	uint8_t rss_key_be[RSS_KEY_SIZE];
>  	/* Event device identifier */
>  	uint8_t eventdev_id;
> -	/* Per ethernet device structure */
> -	struct eth_device_info *eth_devices;
>  	/* Event port identifier */
>  	uint8_t event_port_id;
> +	/* Flag indicating per rxq event buffer */
> +	bool use_queue_event_buf;
> +	/* Per ethernet device structure */
> +	struct eth_device_info *eth_devices;
>  	/* Lock to serialize config updates with service function */
>  	rte_spinlock_t rx_lock;
>  	/* Max mbufs processed in any service function invocation */
> @@ -241,6 +243,7 @@ struct eth_rx_queue_info {
>  	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
>  	uint64_t event;
>  	struct eth_rx_vector_data vector_data;
> +	struct rte_eth_event_enqueue_buffer *event_buf;
>  };
>
>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> @@ -262,6 +265,22 @@ rxa_validate_id(uint8_t id)
>  	return id < RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE;
>  }
>
> +static inline struct rte_eth_event_enqueue_buffer *
> +rxa_event_buf_get(struct rte_event_eth_rx_adapter *rx_adapter,
> +		uint16_t eth_dev_id, uint16_t rx_queue_id)
> +{
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
> +
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct eth_device_info *dev_info =
> +			&rx_adapter->eth_devices[eth_dev_id];
> +		buf = dev_info->rx_queue[rx_queue_id].event_buf;

We can return here directly. It may save an instruction or two.

> +	} else
> +		buf = &rx_adapter->event_enqueue_buffer;

Same here.

> +
> +	return buf;
> +}
> +
>  #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \
>  	if (!rxa_validate_id(id)) { \
>  		RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \
> @@ -767,10 +786,9 @@ rxa_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter,
>
>  /* Enqueue buffered events to event device */
>  static inline uint16_t
> -rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> +rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
> +		struct rte_eth_event_enqueue_buffer *buf)
>  {
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
>  	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>  	uint16_t count = buf->last ? buf->last - buf->head : buf->count;
>
> @@ -888,15 +906,14 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>  		uint16_t eth_dev_id,
>  		uint16_t rx_queue_id,
>  		struct rte_mbuf **mbufs,
> -		uint16_t num)
> +		uint16_t num,
> +		struct rte_eth_event_enqueue_buffer *buf)
>  {
>  	uint32_t i;
>  	struct eth_device_info *dev_info =
>  					&rx_adapter->eth_devices[eth_dev_id];
>  	struct eth_rx_queue_info *eth_rx_queue_info =
>  					&dev_info->rx_queue[rx_queue_id];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -					&rx_adapter->event_enqueue_buffer;
>  	uint16_t new_tail = buf->tail;
>  	uint64_t event = eth_rx_queue_info->event;
>  	uint32_t flow_id_mask = eth_rx_queue_info->flow_id_mask;
> @@ -995,11 +1012,10 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  	uint16_t queue_id,
>  	uint32_t rx_count,
>  	uint32_t max_rx,
> -	int *rxq_empty)
> +	int *rxq_empty,
> +	struct rte_eth_event_enqueue_buffer *buf)
>  {
>  	struct rte_mbuf *mbufs[BATCH_SIZE];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -					&rx_adapter->event_enqueue_buffer;
>  	struct rte_event_eth_rx_adapter_stats *stats =
>  					&rx_adapter->stats;
>  	uint16_t n;
> @@ -1012,7 +1028,7 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  	 */
>  	while (rxa_pkt_buf_available(buf)) {
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
>
>  		stats->rx_poll_count++;
>  		n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE);
> @@ -1021,14 +1037,14 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  				*rxq_empty = 1;
>  			break;
>  		}
> -		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n);
> +		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf);
>  		nb_rx += n;
>  		if (rx_count + nb_rx > max_rx)
>  			break;
>  	}
>
>  	if (buf->count > 0)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	return nb_rx;
>  }
> @@ -1169,7 +1185,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  	ring_lock = &rx_adapter->intr_ring_lock;
>
>  	if (buf->count >= BATCH_SIZE)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	while (rxa_pkt_buf_available(buf)) {
>  		struct eth_device_info *dev_info;
> @@ -1221,7 +1237,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  				continue;
>  			n = rxa_eth_rx(rx_adapter, port, i, nb_rx,
>  				rx_adapter->max_nb_rx,
> -				&rxq_empty);
> +				&rxq_empty, buf);
>  			nb_rx += n;
>
>  			enq_buffer_full = !rxq_empty && n == 0;
> @@ -1242,7 +1258,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  		} else {
>  			n = rxa_eth_rx(rx_adapter, port, queue, nb_rx,
>  				rx_adapter->max_nb_rx,
> -				&rxq_empty);
> +				&rxq_empty, buf);
>  			rx_adapter->qd_valid = !rxq_empty;
>  			nb_rx += n;
>  			if (nb_rx > rx_adapter->max_nb_rx)
> @@ -1273,13 +1289,12 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>  {
>  	uint32_t num_queue;
>  	uint32_t nb_rx = 0;
> -	struct rte_eth_event_enqueue_buffer *buf;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
>  	uint32_t wrr_pos;
>  	uint32_t max_nb_rx;
>
>  	wrr_pos = rx_adapter->wrr_pos;
>  	max_nb_rx = rx_adapter->max_nb_rx;
> -	buf = &rx_adapter->event_enqueue_buffer;
>
>  	/* Iterate through a WRR sequence */
>  	for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) {
> @@ -1287,24 +1302,31 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>  		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
>  		uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
>
> +		buf = rxa_event_buf_get(rx_adapter, d, qid);
> +
>  		/* Don't do a batch dequeue from the rx queue if there isn't
>  		 * enough space in the enqueue buffer.
>  		 */
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
>  		if (!rxa_pkt_buf_available(buf)) {
> -			rx_adapter->wrr_pos = wrr_pos;
> -			return nb_rx;
> +			if (rx_adapter->use_queue_event_buf)
> +				goto poll_next_entry;
> +			else {
> +				rx_adapter->wrr_pos = wrr_pos;
> +				return nb_rx;
> +			}
>  		}
>
>  		nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx,
> -				NULL);
> +				NULL, buf);
>  		if (nb_rx > max_nb_rx) {
>  			rx_adapter->wrr_pos =
>  				(wrr_pos + 1) % rx_adapter->wrr_len;
>  			break;
>  		}
>
> +poll_next_entry:
>  		if (++wrr_pos == rx_adapter->wrr_len)
>  			wrr_pos = 0;
>  	}
> @@ -1315,12 +1337,13 @@ static void
>  rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
>  {
>  	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
>  	struct rte_event *ev;
>
> +	buf = rxa_event_buf_get(rx_adapter, vec->port, vec->queue);
> +
>  	if (buf->count)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	if (vec->vector_ev->nb_elem == 0)
>  		return;
> @@ -1947,9 +1970,16 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
>  	rx_adapter->num_rx_intr -= intrq;
>  	dev_info->nb_rx_intr -= intrq;
>  	dev_info->nb_shared_intr -= intrq && sintrq;
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct rte_eth_event_enqueue_buffer *event_buf =
> +			dev_info->rx_queue[rx_queue_id].event_buf;
> +		rte_free(event_buf->events);
> +		rte_free(event_buf);
> +		dev_info->rx_queue[rx_queue_id].event_buf = NULL;
> +	}
>  }
>
> -static void
> +static int
>  rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	struct eth_device_info *dev_info,
>  	int32_t rx_queue_id,
> @@ -1961,15 +1991,21 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	int intrq;
>  	int sintrq;
>  	struct rte_event *qi_ev;
> +	struct rte_eth_event_enqueue_buffer *new_rx_buf = NULL;
> +	uint16_t eth_dev_id = dev_info->dev->data->port_id;
> +	int ret;
>
>  	if (rx_queue_id == -1) {
>  		uint16_t nb_rx_queues;
>  		uint16_t i;
>
>  		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> -		for (i = 0; i < nb_rx_queues; i++)
> -			rxa_add_queue(rx_adapter, dev_info, i, conf);
> -		return;
> +		for (i = 0; i < nb_rx_queues; i++) {
> +			ret = rxa_add_queue(rx_adapter, dev_info, i, conf);
> +			if (ret)
> +				return ret;
> +		}
> +		return 0;
>  	}
>
>  	pollq = rxa_polled_queue(dev_info, rx_queue_id);
> @@ -2032,6 +2068,37 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  			dev_info->next_q_idx = 0;
>  		}
>  	}
> +
> +	if (!rx_adapter->use_queue_event_buf)
> +		return 0;
> +
> +	new_rx_buf = rte_zmalloc_socket("rx_buffer_meta",
> +				sizeof(*new_rx_buf), 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf == NULL) {
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer meta for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	new_rx_buf->events_size = RTE_ALIGN(conf->event_buf_size, BATCH_SIZE);
> +	new_rx_buf->events_size += (2 * BATCH_SIZE);
> +	new_rx_buf->events = rte_zmalloc_socket("rx_buffer",
> +				sizeof(struct rte_event) *
> +				new_rx_buf->events_size, 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf->events == NULL) {
> +		rte_free(new_rx_buf);
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	queue_info->event_buf = new_rx_buf;
> +
> +	return 0;
>  }
>
>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> @@ -2060,6 +2127,16 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>  			temp_conf.servicing_weight = 1;
>  		}
>  		queue_conf = &temp_conf;
> +
> +		if (queue_conf->servicing_weight == 0 &&
> +		    rx_adapter->use_queue_event_buf) {
> +
> +			RTE_EDEV_LOG_ERR("Use of queue level event buffer "
> +					 "not supported for interrupt queues "
> +					 "dev_id: %d queue_id: %d",
> +					 eth_dev_id, rx_queue_id);
> +			return -EINVAL;
> +		}
>  	}
>
>  	nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> @@ -2139,7 +2216,9 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>
>
>
> -	rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	ret = rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	if (ret)
> +		goto err_free_rxqueue;
>  	rxa_calc_wrr_sequence(rx_adapter, rx_poll, rx_wrr);
>
>  	rte_free(rx_adapter->eth_rx_poll);
> @@ -2160,7 +2239,7 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>  	rte_free(rx_poll);
>  	rte_free(rx_wrr);
>
> -	return 0;
> +	return ret;
>  }
>
>  static int
> @@ -2286,20 +2365,25 @@ rxa_create(uint8_t id, uint8_t dev_id,
>  		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
>
>  	/* Rx adapter event buffer allocation */
> -	buf = &rx_adapter->event_enqueue_buffer;
> -	buf->events_size = rxa_params->event_buf_size;
> -
> -	events = rte_zmalloc_socket(rx_adapter->mem_name,
> -			buf->events_size * sizeof(*events),
> -			0, socket_id);
> -	if (events == NULL) {
> -		RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n");
> -		rte_free(rx_adapter->eth_devices);
> -		rte_free(rx_adapter);
> -		return -ENOMEM;
> -	}
> +	rx_adapter->use_queue_event_buf = rxa_params->use_queue_event_buf;
> +
> +	if (!rx_adapter->use_queue_event_buf) {
> +		buf = &rx_adapter->event_enqueue_buffer;
> +		buf->events_size = rxa_params->event_buf_size;
> +
> +		events = rte_zmalloc_socket(rx_adapter->mem_name,
> +				buf->events_size * sizeof(*events),
> +				0, socket_id);
> +		if (events == NULL) {
> +			RTE_EDEV_LOG_ERR("Failed to allocate memory "
> +					 "for adapter event buffer");
> +			rte_free(rx_adapter->eth_devices);
> +			rte_free(rx_adapter);
> +			return -ENOMEM;
> +		}
>
> -	rx_adapter->event_enqueue_buffer.events = events;
> +		rx_adapter->event_enqueue_buffer.events = events;
> +	}
>
>  	event_eth_rx_adapter[id] = rx_adapter;
>
> @@ -2327,6 +2411,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
>
>  	/* use default values for adapter params */
>  	rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
> +	rxa_params.use_queue_event_buf = false;
>
>  	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
>  }
> @@ -2346,11 +2431,15 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
>  	/* use default values if rxa_params is NULL */
>  	if (rxa_params == NULL) {
>  		rxa_params = &temp_params;
> -		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> -	}
> -
> -	if (rxa_params->event_buf_size == 0)
> +		rxa_params->event_buf_size = 4 * BATCH_SIZE;

This assumes ETH_EVENT_BUFFER_SIZE is set to 6 * BATCH_SIZE, so that we get 4x here and 2x added later. It may break if ETH_EVENT_BUFFER_SIZE is changed later. Can we change the code to just use ETH_EVENT_BUFFER_SIZE here? See below.

> +		rxa_params->use_queue_event_buf = false;
> +	} else if ((!rxa_params->use_queue_event_buf &&
> +		    rxa_params->event_buf_size == 0) ||
> +		   (rxa_params->use_queue_event_buf &&
> +		    rxa_params->event_buf_size != 0)) {
> +		RTE_EDEV_LOG_ERR("Invalid adapter params\n");
>  		return -EINVAL;
> +	}
>
>  	pc = rte_malloc(NULL, sizeof(*pc), 0);
>  	if (pc == NULL)
> @@ -2362,9 +2451,11 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
>  	 * from NIC rx queues to get full buffer utilization and prevent
>  	 * unnecessary rollovers.
>  	 */
> -	rxa_params->event_buf_size = RTE_ALIGN(rxa_params->event_buf_size,
> -					       BATCH_SIZE);
> -	rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> +	if (!rxa_params->use_queue_event_buf) {
> +		rxa_params->event_buf_size =
> +			RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
> +		rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> +	}

The above if condition can be added as an else part of the rxa_params == NULL check. Something like:

	if (rxa_params == NULL) {
		rxa_params = &temp_params;
		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
		rxa_params->use_queue_event_buf = false;
	} else if ((!rxa_params->use_queue_event_buf &&
		    rxa_params->event_buf_size == 0) ||
		   (rxa_params->use_queue_event_buf &&
		    rxa_params->event_buf_size != 0)) {
		RTE_EDEV_LOG_ERR("Invalid adapter params\n");
		return -EINVAL;
	} else if (!rxa_params->use_queue_event_buf) {
		rxa_params->event_buf_size =
			RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
		rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
	}

>
>  	ret = rxa_create(id, dev_id, rxa_params, rxa_default_conf_cb, pc);
>  	if (ret)
> @@ -2418,7 +2509,8 @@ rte_event_eth_rx_adapter_free(uint8_t id)
>  	if (rx_adapter->default_cb_arg)
>  		rte_free(rx_adapter->conf_arg);
>  	rte_free(rx_adapter->eth_devices);
> -	rte_free(rx_adapter->event_enqueue_buffer.events);
> +	if (!rx_adapter->use_queue_event_buf)
> +		rte_free(rx_adapter->event_enqueue_buffer.events);
>  	rte_free(rx_adapter);
>  	event_eth_rx_adapter[id] = NULL;
>
> @@ -2522,6 +2614,14 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>  		return -EINVAL;
>  	}
>
> +	if ((rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size == 0) ||
> +	    (!rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size != 0)) {
> +		RTE_EDEV_LOG_ERR("Invalid Event buffer size for the queue");
> +		return -EINVAL;
> +	}
> +
>  	dev_info = &rx_adapter->eth_devices[eth_dev_id];
>
>  	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
> --
> 2.25.1