From: "Jayatheerthan, Jay"
To: "Naga Harish K, S V", "jerinj@marvell.com"
CC: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v5 4/5] eventdev/rx_adapter: implement per queue event buffer
Date: Tue, 5 Oct 2021 15:01:21 +0000
References: <20210930082901.293541-1-s.v.naga.harish.k@intel.com>
 <20211004054108.3890018-1-s.v.naga.harish.k@intel.com>
 <20211004054108.3890018-4-s.v.naga.harish.k@intel.com>

> -----Original Message-----
> From: Naga Harish K, S V
> Sent: Tuesday, October 5, 2021 8:18 PM
> To: Jayatheerthan, Jay; jerinj@marvell.com
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v5 4/5] eventdev/rx_adapter: implement per queue event buffer
>
> Hi Jay,
>
> > -----Original Message-----
> > From: Jayatheerthan, Jay
> > Sent: Tuesday, October 5, 2021 1:26 PM
> > To: Naga Harish K, S V; jerinj@marvell.com
> > Cc: dev@dpdk.org
> > Subject: RE: [PATCH v5 4/5] eventdev/rx_adapter: implement per queue
> > event buffer
> >
> > > -----Original Message-----
> > > From: Naga Harish K, S V
> > > Sent: Monday, October 4, 2021 11:11 AM
> > > To: jerinj@marvell.com; Jayatheerthan, Jay
> > > Cc: dev@dpdk.org
> > > Subject: [PATCH v5 4/5] eventdev/rx_adapter: implement per queue event
> > > buffer
> > >
> > > This patch implements the per queue event buffer with the required
> > > validations.
> > >
> > > Signed-off-by: Naga Harish K S V
> > > ---
> > >  lib/eventdev/rte_event_eth_rx_adapter.c | 187 +++++++++++++++++-------
> > >  1 file changed, 138 insertions(+), 49 deletions(-)
> > >
> > > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> > > index 606db241b8..b61af0e75e 100644
> > > --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> > > @@ -102,10 +102,12 @@ struct rte_event_eth_rx_adapter {
> > >  	uint8_t rss_key_be[RSS_KEY_SIZE];
> > >  	/* Event device identifier */
> > >  	uint8_t eventdev_id;
> > > -	/* Per ethernet device structure */
> > > -	struct eth_device_info *eth_devices;
> > >  	/* Event port identifier */
> > >  	uint8_t event_port_id;
> > > +	/* Flag indicating per rxq event buffer */
> > > +	bool use_queue_event_buf;
> > > +	/* Per ethernet device structure */
> > > +	struct eth_device_info *eth_devices;
> > >  	/* Lock to serialize config updates with service function */
> > >  	rte_spinlock_t rx_lock;
> > >  	/* Max mbufs processed in any service function invocation */
> > > @@ -241,6 +243,7 @@ struct eth_rx_queue_info {
> > >  	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
> > >  	uint64_t event;
> > >  	struct eth_rx_vector_data vector_data;
> > > +	struct rte_eth_event_enqueue_buffer *event_buf;
> > >  };
> > >
> > >  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> > > @@ -767,10 +770,9 @@ rxa_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter,
> > >
> > >  /* Enqueue buffered events to event device */
> > >  static inline uint16_t
> > > -rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> > > +rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
> > > +		struct rte_eth_event_enqueue_buffer *buf)
> > >  {
> > > -	struct rte_eth_event_enqueue_buffer *buf =
> > > -		&rx_adapter->event_enqueue_buffer;
> > >  	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
> > >  	uint16_t count = buf->last ? buf->last - buf->head : buf->count;
> > >
> > > @@ -888,15 +890,14 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  		uint16_t eth_dev_id,
> > >  		uint16_t rx_queue_id,
> > >  		struct rte_mbuf **mbufs,
> > > -		uint16_t num)
> > > +		uint16_t num,
> > > +		struct rte_eth_event_enqueue_buffer *buf)
> > >  {
> > >  	uint32_t i;
> > >  	struct eth_device_info *dev_info =
> > >  					&rx_adapter->eth_devices[eth_dev_id];
> > >  	struct eth_rx_queue_info *eth_rx_queue_info =
> > >  					&dev_info->rx_queue[rx_queue_id];
> > > -	struct rte_eth_event_enqueue_buffer *buf =
> > > -					&rx_adapter->event_enqueue_buffer;
> > >  	uint16_t new_tail = buf->tail;
> > >  	uint64_t event = eth_rx_queue_info->event;
> > >  	uint32_t flow_id_mask = eth_rx_queue_info->flow_id_mask;
> > > @@ -995,11 +996,10 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	uint16_t queue_id,
> > >  	uint32_t rx_count,
> > >  	uint32_t max_rx,
> > > -	int *rxq_empty)
> > > +	int *rxq_empty,
> > > +	struct rte_eth_event_enqueue_buffer *buf)
> > >  {
> > >  	struct rte_mbuf *mbufs[BATCH_SIZE];
> > > -	struct rte_eth_event_enqueue_buffer *buf =
> > > -			&rx_adapter->event_enqueue_buffer;
> > >  	struct rte_event_eth_rx_adapter_stats *stats =
> > >  			&rx_adapter->stats;
> > >  	uint16_t n;
> > > @@ -1012,7 +1012,7 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	 */
> > >  	while (rxa_pkt_buf_available(buf)) {
> > >  		if (buf->count >= BATCH_SIZE)
> > > -			rxa_flush_event_buffer(rx_adapter);
> > > +			rxa_flush_event_buffer(rx_adapter, buf);
> > >
> > >  		stats->rx_poll_count++;
> > >  		n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE);
> > > @@ -1021,14 +1021,14 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  			*rxq_empty = 1;
> > >  			break;
> > >  		}
> > > -		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n);
> > > +		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf);
> > >  		nb_rx += n;
> > >  		if (rx_count + nb_rx > max_rx)
> > >  			break;
> > >  	}
> > >
> > >  	if (buf->count > 0)
> > > -		rxa_flush_event_buffer(rx_adapter);
> > > +		rxa_flush_event_buffer(rx_adapter, buf);
> > >
> > >  	return nb_rx;
> > >  }
> > > @@ -1169,7 +1169,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> > >  	ring_lock = &rx_adapter->intr_ring_lock;
> > >
> > >  	if (buf->count >= BATCH_SIZE)
> > > -		rxa_flush_event_buffer(rx_adapter);
> > > +		rxa_flush_event_buffer(rx_adapter, buf);
> > >
> > >  	while (rxa_pkt_buf_available(buf)) {
> > >  		struct eth_device_info *dev_info;
> > > @@ -1221,7 +1221,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> > >  			continue;
> > >  		n = rxa_eth_rx(rx_adapter, port, i, nb_rx,
> > >  			rx_adapter->max_nb_rx,
> > > -			&rxq_empty);
> > > +			&rxq_empty, buf);
> > >  		nb_rx += n;
> > >
> > >  		enq_buffer_full = !rxq_empty && n == 0;
> > > @@ -1242,7 +1242,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
> > >  	} else {
> > >  		n = rxa_eth_rx(rx_adapter, port, queue, nb_rx,
> > >  			rx_adapter->max_nb_rx,
> > > -			&rxq_empty);
> > > +			&rxq_empty, buf);
> > >  		rx_adapter->qd_valid = !rxq_empty;
> > >  		nb_rx += n;
> > >  		if (nb_rx > rx_adapter->max_nb_rx)
> > > @@ -1273,13 +1273,12 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> > >  {
> > >  	uint32_t num_queue;
> > >  	uint32_t nb_rx = 0;
> > > -	struct rte_eth_event_enqueue_buffer *buf;
> > > +	struct rte_eth_event_enqueue_buffer *buf = NULL;
> > >  	uint32_t wrr_pos;
> > >  	uint32_t max_nb_rx;
> > >
> > >  	wrr_pos = rx_adapter->wrr_pos;
> > >  	max_nb_rx = rx_adapter->max_nb_rx;
> > > -	buf = &rx_adapter->event_enqueue_buffer;
> > >
> > >  	/* Iterate through a WRR sequence */
> > >  	for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) {
> > > @@ -1287,24 +1286,36 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> > >  		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
> > >  		uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
> > >
> > > +		if (rx_adapter->use_queue_event_buf) {
> > > +			struct eth_device_info *dev_info =
> > > +				&rx_adapter->eth_devices[d];
> > > +			buf = dev_info->rx_queue[qid].event_buf;
> > > +		} else
> > > +			buf = &rx_adapter->event_enqueue_buffer;
> > > +
> > >  		/* Don't do a batch dequeue from the rx queue if there isn't
> > >  		 * enough space in the enqueue buffer.
> > >  		 */
> > >  		if (buf->count >= BATCH_SIZE)
> > > -			rxa_flush_event_buffer(rx_adapter);
> > > +			rxa_flush_event_buffer(rx_adapter, buf);
> > >  		if (!rxa_pkt_buf_available(buf)) {
> > > -			rx_adapter->wrr_pos = wrr_pos;
> > > -			return nb_rx;
> > > +			if (rx_adapter->use_queue_event_buf)
> > > +				goto poll_next_entry;
> > > +			else {
> > > +				rx_adapter->wrr_pos = wrr_pos;
> > > +				return nb_rx;
> > > +			}
> > >  		}
> > >
> > >  		nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx,
> > > -				NULL);
> > > +				NULL, buf);
> > >  		if (nb_rx > max_nb_rx) {
> > >  			rx_adapter->wrr_pos =
> > >  				(wrr_pos + 1) % rx_adapter->wrr_len;
> > >  			break;
> > >  		}
> > >
> > > +poll_next_entry:
> > >  		if (++wrr_pos == rx_adapter->wrr_len)
> > >  			wrr_pos = 0;
> > >  	}
> > > @@ -1315,12 +1326,18 @@ static void
> > >  rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
> > >  {
> > >  	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> > > -	struct rte_eth_event_enqueue_buffer *buf =
> > > -			&rx_adapter->event_enqueue_buffer;
> > > +	struct rte_eth_event_enqueue_buffer *buf = NULL;
> > >  	struct rte_event *ev;
> > >
> > > +	if (rx_adapter->use_queue_event_buf) {
> > > +		struct eth_device_info *dev_info =
> > > +			&rx_adapter->eth_devices[vec->port];
> > > +		buf = dev_info->rx_queue[vec->queue].event_buf;
> > > +	} else
> > > +		buf = &rx_adapter->event_enqueue_buffer;
> > > +
> >
> > The above code to get the buffer can be made an inline function since it is
> > needed in more than one place.
>
> Added new inline function to get event buffer pointer in v6 patch set.
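
For reference, a rough sketch of what such a helper could look like, built
from the selection logic already present in this patch. The name
rxa_event_buf_get and its exact signature are illustrative only; the actual
v6 helper may differ:

	/* Return the per-queue event buffer when enabled, otherwise the
	 * adapter-level shared buffer.
	 */
	static inline struct rte_eth_event_enqueue_buffer *
	rxa_event_buf_get(struct rte_event_eth_rx_adapter *rx_adapter,
			  uint16_t eth_dev_id, uint16_t rx_queue_id)
	{
		if (rx_adapter->use_queue_event_buf) {
			struct eth_device_info *dev_info =
				&rx_adapter->eth_devices[eth_dev_id];
			return dev_info->rx_queue[rx_queue_id].event_buf;
		}

		return &rx_adapter->event_enqueue_buffer;
	}
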
>
> >
> > >  	if (buf->count)
> > > -		rxa_flush_event_buffer(rx_adapter);
> > > +		rxa_flush_event_buffer(rx_adapter, buf);
> > >
> > >  	if (vec->vector_ev->nb_elem == 0)
> > >  		return;
> > > @@ -1947,9 +1964,16 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	rx_adapter->num_rx_intr -= intrq;
> > >  	dev_info->nb_rx_intr -= intrq;
> > >  	dev_info->nb_shared_intr -= intrq && sintrq;
> > > +	if (rx_adapter->use_queue_event_buf) {
> > > +		struct rte_eth_event_enqueue_buffer *event_buf =
> > > +			dev_info->rx_queue[rx_queue_id].event_buf;
> > > +		rte_free(event_buf->events);
> > > +		rte_free(event_buf);
> > > +		dev_info->rx_queue[rx_queue_id].event_buf = NULL;
> > > +	}
> > >  }
> > >
> > > -static void
> > > +static int
> > >  rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	struct eth_device_info *dev_info,
> > >  	int32_t rx_queue_id,
> > > @@ -1961,15 +1985,21 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	int intrq;
> > >  	int sintrq;
> > >  	struct rte_event *qi_ev;
> > > +	struct rte_eth_event_enqueue_buffer *new_rx_buf = NULL;
> > > +	uint16_t eth_dev_id = dev_info->dev->data->port_id;
> > > +	int ret;
> > >
> > >  	if (rx_queue_id == -1) {
> > >  		uint16_t nb_rx_queues;
> > >  		uint16_t i;
> > >
> > >  		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> > > -		for (i = 0; i < nb_rx_queues; i++)
> > > -			rxa_add_queue(rx_adapter, dev_info, i, conf);
> > > -		return;
> > > +		for (i = 0; i < nb_rx_queues; i++) {
> > > +			ret = rxa_add_queue(rx_adapter, dev_info, i, conf);
> > > +			if (ret)
> > > +				return ret;
> > > +		}
> > > +		return 0;
> > >  	}
> > >
> > >  	pollq = rxa_polled_queue(dev_info, rx_queue_id);
> > > @@ -2032,6 +2062,37 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  			dev_info->next_q_idx = 0;
> > >  		}
> > >  	}
> > > +
> > > +	if (!rx_adapter->use_queue_event_buf)
> > > +		return 0;
> > > +
> > > +	new_rx_buf = rte_zmalloc_socket("rx_buffer_meta",
> > > +				sizeof(*new_rx_buf), 0,
> > > +				rte_eth_dev_socket_id(eth_dev_id));
> > > +	if (new_rx_buf == NULL) {
> > > +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer meta for "
> > > +				"dev_id: %d queue_id: %d",
> > > +				eth_dev_id, rx_queue_id);
> > > +		return -ENOMEM;
> > > +	}
> > > +
> > > +	new_rx_buf->events_size = RTE_ALIGN(conf->event_buf_size, BATCH_SIZE);
> > > +	new_rx_buf->events_size += (2 * BATCH_SIZE);
> > > +	new_rx_buf->events = rte_zmalloc_socket("rx_buffer",
> > > +				sizeof(struct rte_event) *
> > > +				new_rx_buf->events_size, 0,
> > > +				rte_eth_dev_socket_id(eth_dev_id));
> > > +	if (new_rx_buf->events == NULL) {
> > > +		rte_free(new_rx_buf);
> > > +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer for "
> > > +				"dev_id: %d queue_id: %d",
> > > +				eth_dev_id, rx_queue_id);
> > > +		return -ENOMEM;
> > > +	}
> > > +
> > > +	queue_info->event_buf = new_rx_buf;
> > > +
> > > +	return 0;
> > >  }
> > >
> > >  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> > > @@ -2060,6 +2121,16 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  			temp_conf.servicing_weight = 1;
> > >  		}
> > >  		queue_conf = &temp_conf;
> > > +
> > > +		if (queue_conf->servicing_weight == 0 &&
> > > +		    rx_adapter->use_queue_event_buf) {
> > > +
> > > +			RTE_EDEV_LOG_ERR("Use of queue level event buffer "
> > > +					"not supported for interrupt queues "
> > > +					"dev_id: %d queue_id: %d",
> > > +					eth_dev_id, rx_queue_id);
> > > +			return -EINVAL;
> > > +		}
> > >  	}
> > >
> > >  	nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> > > @@ -2139,7 +2210,9 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> > >
> > >
> > >
> > > -	rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> > > +	ret = rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> > > +	if (ret)
> > > +		goto err_free_rxqueue;
> > >  	rxa_calc_wrr_sequence(rx_adapter, rx_poll, rx_wrr);
> > >
> > >  	rte_free(rx_adapter->eth_rx_poll);
> > > @@ -2160,7 +2233,7 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> > >  	rte_free(rx_poll);
> > >  	rte_free(rx_wrr);
> > >
> > > -	return 0;
> > > +	return ret;
> > >  }
> > >
> > >  static int
> > > @@ -2286,20 +2359,26 @@ rxa_create(uint8_t id, uint8_t dev_id,
> > >  		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
> > >
> > >  	/* Rx adapter event buffer allocation */
> > > -	buf = &rx_adapter->event_enqueue_buffer;
> > > -	buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
> > > -
> > > -	events = rte_zmalloc_socket(rx_adapter->mem_name,
> > > -			buf->events_size * sizeof(*events),
> > > -			0, socket_id);
> > > -	if (events == NULL) {
> > > -		RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n");
> > > -		rte_free(rx_adapter->eth_devices);
> > > -		rte_free(rx_adapter);
> > > -		return -ENOMEM;
> > > -	}
> > > +	rx_adapter->use_queue_event_buf = rxa_params->use_queue_event_buf;
> > > +
> > > +	if (!rx_adapter->use_queue_event_buf) {
> > > +		buf = &rx_adapter->event_enqueue_buffer;
> > > +		buf->events_size = RTE_ALIGN(rxa_params->event_buf_size,
> > > +					     BATCH_SIZE);
> > > +
> > > +		events = rte_zmalloc_socket(rx_adapter->mem_name,
> > > +					    buf->events_size * sizeof(*events),
> > > +					    0, socket_id);
> > > +		if (events == NULL) {
> > > +			RTE_EDEV_LOG_ERR("Failed to allocate memory "
> > > +					"for adapter event buffer");
> > > +			rte_free(rx_adapter->eth_devices);
> > > +			rte_free(rx_adapter);
> > > +			return -ENOMEM;
> > > +		}
> > >
> > > -	rx_adapter->event_enqueue_buffer.events = events;
> > > +		rx_adapter->event_enqueue_buffer.events = events;
> > > +	}
> > >
> > >  	event_eth_rx_adapter[id] = rx_adapter;
> > >
> > > @@ -2327,6 +2406,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > >
> > >  	/* use default values for adapter params */
> > >  	rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
> > > +	rxa_params.use_queue_event_buf = false;
> > >
> > >  	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
> > >  }
> > > @@ -2347,9 +2427,9 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
> > >  	if (rxa_params == NULL) {
> > >  		rxa_params = &temp_params;
> > >  		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> > > -	}
> > > -
> > > -	if (rxa_params->event_buf_size == 0)
> > > +		rxa_params->use_queue_event_buf = false;
> > > +	} else if ((!rxa_params->use_queue_event_buf &&
> > > +		rxa_params->event_buf_size == 0))

My earlier comment applies here. Another error case is configuring both:
rxa_params->use_queue_event_buf == true and rxa_params->event_buf_size != 0.
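
For illustration only, a minimal sketch of how the parameter check could
cover both directions if this comment is taken; this is not part of the
submitted patch, and the exact structure is only a suggestion:

	if (rxa_params == NULL) {
		/* Defaults: shared event buffer of the default size */
		rxa_params = &temp_params;
		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
		rxa_params->use_queue_event_buf = false;
	} else if ((!rxa_params->use_queue_event_buf &&
		    rxa_params->event_buf_size == 0) ||
		   (rxa_params->use_queue_event_buf &&
		    rxa_params->event_buf_size != 0)) {
		/* event_buf_size is required for the shared buffer and must
		 * be left at 0 when per-queue buffers are requested; the
		 * per-queue size is then supplied at queue add time.
		 */
		return -EINVAL;
	}
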
> > >  		return -EINVAL;
> > >
> > >  	pc = rte_malloc(NULL, sizeof(*pc), 0);
> > > @@ -2418,7 +2498,8 @@ rte_event_eth_rx_adapter_free(uint8_t id)
> > >  	if (rx_adapter->default_cb_arg)
> > >  		rte_free(rx_adapter->conf_arg);
> > >  	rte_free(rx_adapter->eth_devices);
> > > -	rte_free(rx_adapter->event_enqueue_buffer.events);
> > > +	if (!rx_adapter->use_queue_event_buf)
> > > +		rte_free(rx_adapter->event_enqueue_buffer.events);
> > >  	rte_free(rx_adapter);
> > >  	event_eth_rx_adapter[id] = NULL;
> > >
> > > @@ -2522,6 +2603,14 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> > >  		return -EINVAL;
> > >  	}
> > >
> > > +	if ((rx_adapter->use_queue_event_buf &&
> > > +	    queue_conf->event_buf_size == 0) ||
> > > +	    (!rx_adapter->use_queue_event_buf &&
> > > +	    queue_conf->event_buf_size != 0)) {
> > > +		RTE_EDEV_LOG_ERR("Invalid Event buffer size for the queue");
> > > +		return -EINVAL;
> > > +	}
> > > +
> >
> > Another error case is configuring both - rx_adapter->use_queue_event_buf
> > = true and queue_conf->event_buf_size != 0.
>
> This is valid case.

My bad, wrong place. See above.

>
> >
> > >  	dev_info = &rx_adapter->eth_devices[eth_dev_id];
> > >
> > >  	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
> > > --
> > > 2.25.1