From: "Jayatheerthan, Jay" <jay.jayatheerthan@intel.com>
To: "Naga Harish K, S V" <s.v.naga.harish.k@intel.com>, "jerinj@marvell.com"
 <jerinj@marvell.com>
CC: "dev@dpdk.org" <dev@dpdk.org>
Thread-Topic: [PATCH v5 4/5] eventdev/rx_adapter: implement per queue event
 buffer
Thread-Index: AQHXuOJ8tV8IB94qOEeT3tF2kyQIE6vEBpbg
Date: Tue, 5 Oct 2021 07:55:55 +0000
Message-ID: <BN6SPR00MB239F2E9351B1BA53EAFE189FDAF9@BN6SPR00MB239.namprd11.prod.outlook.com>
References: <20210930082901.293541-1-s.v.naga.harish.k@intel.com>
 <20211004054108.3890018-1-s.v.naga.harish.k@intel.com>
 <20211004054108.3890018-4-s.v.naga.harish.k@intel.com>
In-Reply-To: <20211004054108.3890018-4-s.v.naga.harish.k@intel.com>
Accept-Language: en-US
Content-Language: en-US
Subject: Re: [dpdk-dev] [PATCH v5 4/5] eventdev/rx_adapter: implement per
 queue event buffer
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions <dev.dpdk.org>

> -----Original Message-----
> From: Naga Harish K, S V <s.v.naga.harish.k@intel.com>
> Sent: Monday, October 4, 2021 11:11 AM
> To: jerinj@marvell.com; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v5 4/5] eventdev/rx_adapter: implement per queue event buffer
>
> this patch implement the per queue event buffer with
> required validations.
>
> Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
> ---
>  lib/eventdev/rte_event_eth_rx_adapter.c | 187 +++++++++++++++++-------
>  1 file changed, 138 insertions(+), 49 deletions(-)
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index 606db241b8..b61af0e75e 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -102,10 +102,12 @@ struct rte_event_eth_rx_adapter {
>  	uint8_t rss_key_be[RSS_KEY_SIZE];
>  	/* Event device identifier */
>  	uint8_t eventdev_id;
> -	/* Per ethernet device structure */
> -	struct eth_device_info *eth_devices;
>  	/* Event port identifier */
>  	uint8_t event_port_id;
> +	/* Flag indicating per rxq event buffer */
> +	bool use_queue_event_buf;
> +	/* Per ethernet device structure */
> +	struct eth_device_info *eth_devices;
>  	/* Lock to serialize config updates with service function */
>  	rte_spinlock_t rx_lock;
>  	/* Max mbufs processed in any service function invocation */
> @@ -241,6 +243,7 @@ struct eth_rx_queue_info {
>  	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
>  	uint64_t event;
>  	struct eth_rx_vector_data vector_data;
> +	struct rte_eth_event_enqueue_buffer *event_buf;
>  };
>
>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> @@ -767,10 +770,9 @@ rxa_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter,
>
>  /* Enqueue buffered events to event device */
>  static inline uint16_t
> -rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> +rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
> +		       struct rte_eth_event_enqueue_buffer *buf)
>  {
> -	struct rte_eth_event_enqueue_buffer *buf =
> -	    &rx_adapter->event_enqueue_buffer;
>  	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>  	uint16_t count = buf->last ? buf->last - buf->head : buf->count;
>
> @@ -888,15 +890,14 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>  		uint16_t eth_dev_id,
>  		uint16_t rx_queue_id,
>  		struct rte_mbuf **mbufs,
> -		uint16_t num)
> +		uint16_t num,
> +		struct rte_eth_event_enqueue_buffer *buf)
>  {
>  	uint32_t i;
>  	struct eth_device_info *dev_info =
>  					&rx_adapter->eth_devices[eth_dev_id];
>  	struct eth_rx_queue_info *eth_rx_queue_info =
>  					&dev_info->rx_queue[rx_queue_id];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -					&rx_adapter->event_enqueue_buffer;
>  	uint16_t new_tail = buf->tail;
>  	uint64_t event = eth_rx_queue_info->event;
>  	uint32_t flow_id_mask = eth_rx_queue_info->flow_id_mask;
> @@ -995,11 +996,10 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  	uint16_t queue_id,
>  	uint32_t rx_count,
>  	uint32_t max_rx,
> -	int *rxq_empty)
> +	int *rxq_empty,
> +	struct rte_eth_event_enqueue_buffer *buf)
>  {
>  	struct rte_mbuf *mbufs[BATCH_SIZE];
> -	struct rte_eth_event_enqueue_buffer *buf =
> -					&rx_adapter->event_enqueue_buffer;
>  	struct rte_event_eth_rx_adapter_stats *stats =
>  					&rx_adapter->stats;
>  	uint16_t n;
> @@ -1012,7 +1012,7 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  	 */
>  	while (rxa_pkt_buf_available(buf)) {
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
>
>  		stats->rx_poll_count++;
>  		n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE);
> @@ -1021,14 +1021,14 @@ rxa_eth_rx(struct rte_event_eth_rx_adapter *rx_adapter,
>  				*rxq_empty = 1;
>  			break;
>  		}
> -		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n);
> +		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf);
>  		nb_rx += n;
>  		if (rx_count + nb_rx > max_rx)
>  			break;
>  	}
>
>  	if (buf->count > 0)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	return nb_rx;
>  }
> @@ -1169,7 +1169,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  	ring_lock = &rx_adapter->intr_ring_lock;
>
>  	if (buf->count >= BATCH_SIZE)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	while (rxa_pkt_buf_available(buf)) {
>  		struct eth_device_info *dev_info;
> @@ -1221,7 +1221,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  					continue;
>  				n = rxa_eth_rx(rx_adapter, port, i, nb_rx,
>  					rx_adapter->max_nb_rx,
> -					&rxq_empty);
> +					&rxq_empty, buf);
>  				nb_rx += n;
>
>  				enq_buffer_full = !rxq_empty && n == 0;
> @@ -1242,7 +1242,7 @@ rxa_intr_ring_dequeue(struct rte_event_eth_rx_adapter *rx_adapter)
>  		} else {
>  			n = rxa_eth_rx(rx_adapter, port, queue, nb_rx,
>  				rx_adapter->max_nb_rx,
> -				&rxq_empty);
> +				&rxq_empty, buf);
>  			rx_adapter->qd_valid = !rxq_empty;
>  			nb_rx += n;
>  			if (nb_rx > rx_adapter->max_nb_rx)
> @@ -1273,13 +1273,12 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>  {
>  	uint32_t num_queue;
>  	uint32_t nb_rx = 0;
> -	struct rte_eth_event_enqueue_buffer *buf;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
>  	uint32_t wrr_pos;
>  	uint32_t max_nb_rx;
>
>  	wrr_pos = rx_adapter->wrr_pos;
>  	max_nb_rx = rx_adapter->max_nb_rx;
> -	buf = &rx_adapter->event_enqueue_buffer;
>
>  	/* Iterate through a WRR sequence */
>  	for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) {
> @@ -1287,24 +1286,36 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>  		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
>  		uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
>
> +		if (rx_adapter->use_queue_event_buf) {
> +			struct eth_device_info *dev_info =
> +				&rx_adapter->eth_devices[d];
> +			buf = dev_info->rx_queue[qid].event_buf;
> +		} else
> +			buf = &rx_adapter->event_enqueue_buffer;
> +
>  		/* Don't do a batch dequeue from the rx queue if there isn't
>  		 * enough space in the enqueue buffer.
>  		 */
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter);
> +			rxa_flush_event_buffer(rx_adapter, buf);
>  		if (!rxa_pkt_buf_available(buf)) {
> -			rx_adapter->wrr_pos = wrr_pos;
> -			return nb_rx;
> +			if (rx_adapter->use_queue_event_buf)
> +				goto poll_next_entry;
> +			else {
> +				rx_adapter->wrr_pos = wrr_pos;
> +				return nb_rx;
> +			}
>  		}
>
>  		nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx,
> -				NULL);
> +				NULL, buf);
>  		if (nb_rx > max_nb_rx) {
>  			rx_adapter->wrr_pos =
>  				    (wrr_pos + 1) % rx_adapter->wrr_len;
>  			break;
>  		}
>
> +poll_next_entry:
>  		if (++wrr_pos == rx_adapter->wrr_len)
>  			wrr_pos = 0;
>  	}
> @@ -1315,12 +1326,18 @@ static void
>  rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
>  {
>  	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> -	struct rte_eth_event_enqueue_buffer *buf =
> -		&rx_adapter->event_enqueue_buffer;
> +	struct rte_eth_event_enqueue_buffer *buf = NULL;
>  	struct rte_event *ev;
>
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct eth_device_info *dev_info =
> +			&rx_adapter->eth_devices[vec->port];
> +		buf = dev_info->rx_queue[vec->queue].event_buf;
> +	} else
> +		buf = &rx_adapter->event_enqueue_buffer;
> +

The above code to get the buffer can be made an inline function, since it is needed in more than one place.

>  	if (buf->count)
> -		rxa_flush_event_buffer(rx_adapter);
> +		rxa_flush_event_buffer(rx_adapter, buf);
>
>  	if (vec->vector_ev->nb_elem == 0)
>  		return;
> @@ -1947,9 +1964,16 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
>  	rx_adapter->num_rx_intr -= intrq;
>  	dev_info->nb_rx_intr -= intrq;
>  	dev_info->nb_shared_intr -= intrq && sintrq;
> +	if (rx_adapter->use_queue_event_buf) {
> +		struct rte_eth_event_enqueue_buffer *event_buf =
> +			dev_info->rx_queue[rx_queue_id].event_buf;
> +		rte_free(event_buf->events);
> +		rte_free(event_buf);
> +		dev_info->rx_queue[rx_queue_id].event_buf = NULL;
> +	}
>  }
>
> -static void
> +static int
>  rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	struct eth_device_info *dev_info,
>  	int32_t rx_queue_id,
> @@ -1961,15 +1985,21 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	int intrq;
>  	int sintrq;
>  	struct rte_event *qi_ev;
> +	struct rte_eth_event_enqueue_buffer *new_rx_buf = NULL;
> +	uint16_t eth_dev_id = dev_info->dev->data->port_id;
> +	int ret;
>
>  	if (rx_queue_id == -1) {
>  		uint16_t nb_rx_queues;
>  		uint16_t i;
>
>  		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> -		for (i = 0; i <	nb_rx_queues; i++)
> -			rxa_add_queue(rx_adapter, dev_info, i, conf);
> -		return;
> +		for (i = 0; i <	nb_rx_queues; i++) {
> +			ret = rxa_add_queue(rx_adapter, dev_info, i, conf);
> +			if (ret)
> +				return ret;
> +		}
> +		return 0;
>  	}
>
>  	pollq = rxa_polled_queue(dev_info, rx_queue_id);
> @@ -2032,6 +2062,37 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  				dev_info->next_q_idx = 0;
>  		}
>  	}
> +
> +	if (!rx_adapter->use_queue_event_buf)
> +		return 0;
> +
> +	new_rx_buf = rte_zmalloc_socket("rx_buffer_meta",
> +				sizeof(*new_rx_buf), 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf == NULL) {
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer meta for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	new_rx_buf->events_size = RTE_ALIGN(conf->event_buf_size, BATCH_SIZE);
> +	new_rx_buf->events_size += (2 * BATCH_SIZE);
> +	new_rx_buf->events = rte_zmalloc_socket("rx_buffer",
> +				sizeof(struct rte_event) *
> +				new_rx_buf->events_size, 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (new_rx_buf->events == NULL) {
> +		rte_free(new_rx_buf);
> +		RTE_EDEV_LOG_ERR("Failed to allocate event buffer for "
> +				 "dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	queue_info->event_buf = new_rx_buf;
> +
> +	return 0;
>  }
>
>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> @@ -2060,6 +2121,16 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>  			temp_conf.servicing_weight = 1;
>  		}
>  		queue_conf = &temp_conf;
> +
> +		if (queue_conf->servicing_weight == 0 &&
> +		    rx_adapter->use_queue_event_buf) {
> +
> +			RTE_EDEV_LOG_ERR("Use of queue level event buffer "
> +					 "not supported for interrupt queues "
> +					 "dev_id: %d queue_id: %d",
> +					 eth_dev_id, rx_queue_id);
> +			return -EINVAL;
> +		}
>  	}
>
>  	nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> @@ -2139,7 +2210,9 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>
>
>
> -	rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	ret = rxa_add_queue(rx_adapter, dev_info, rx_queue_id, queue_conf);
> +	if (ret)
> +		goto err_free_rxqueue;
>  	rxa_calc_wrr_sequence(rx_adapter, rx_poll, rx_wrr);
>
>  	rte_free(rx_adapter->eth_rx_poll);
> @@ -2160,7 +2233,7 @@ static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>  	rte_free(rx_poll);
>  	rte_free(rx_wrr);
>
> -	return 0;
> +	return ret;
>  }
>
>  static int
> @@ -2286,20 +2359,26 @@ rxa_create(uint8_t id, uint8_t dev_id,
>  		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
>
>  	/* Rx adapter event buffer allocation */
> -	buf = &rx_adapter->event_enqueue_buffer;
> -	buf->events_size = RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
> -
> -	events = rte_zmalloc_socket(rx_adapter->mem_name,
> -				    buf->events_size * sizeof(*events),
> -				    0, socket_id);
> -	if (events == NULL) {
> -		RTE_EDEV_LOG_ERR("Failed to allocate mem for event buffer\n");
> -		rte_free(rx_adapter->eth_devices);
> -		rte_free(rx_adapter);
> -		return -ENOMEM;
> -	}
> +	rx_adapter->use_queue_event_buf = rxa_params->use_queue_event_buf;
> +
> +	if (!rx_adapter->use_queue_event_buf) {
> +		buf = &rx_adapter->event_enqueue_buffer;
> +		buf->events_size = RTE_ALIGN(rxa_params->event_buf_size,
> +					     BATCH_SIZE);
> +
> +		events = rte_zmalloc_socket(rx_adapter->mem_name,
> +					    buf->events_size * sizeof(*events),
> +					    0, socket_id);
> +		if (events == NULL) {
> +			RTE_EDEV_LOG_ERR("Failed to allocate memory "
> +					 "for adapter event buffer");
> +			rte_free(rx_adapter->eth_devices);
> +			rte_free(rx_adapter);
> +			return -ENOMEM;
> +		}
>
> -	rx_adapter->event_enqueue_buffer.events = events;
> +		rx_adapter->event_enqueue_buffer.events = events;
> +	}
>
>  	event_eth_rx_adapter[id] = rx_adapter;
>
> @@ -2327,6 +2406,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
>
>  	/* use default values for adapter params */
>  	rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
> +	rxa_params.use_queue_event_buf = false;
>
>  	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
>  }
> @@ -2347,9 +2427,9 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
>  	if (rxa_params == NULL) {
>  		rxa_params = &temp_params;
>  		rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> -	}
> -
> -	if (rxa_params->event_buf_size == 0)
> +		rxa_params->use_queue_event_buf = false;
> +	} else if ((!rxa_params->use_queue_event_buf &&
> +		    rxa_params->event_buf_size == 0))
>  		return -EINVAL;
>
>  	pc = rte_malloc(NULL, sizeof(*pc), 0);
> @@ -2418,7 +2498,8 @@ rte_event_eth_rx_adapter_free(uint8_t id)
>  	if (rx_adapter->default_cb_arg)
>  		rte_free(rx_adapter->conf_arg);
>  	rte_free(rx_adapter->eth_devices);
> -	rte_free(rx_adapter->event_enqueue_buffer.events);
> +	if (!rx_adapter->use_queue_event_buf)
> +		rte_free(rx_adapter->event_enqueue_buffer.events);
>  	rte_free(rx_adapter);
>  	event_eth_rx_adapter[id] = NULL;
>
> @@ -2522,6 +2603,14 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>  		return -EINVAL;
>  	}
>
> +	if ((rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size == 0) ||
> +	    (!rx_adapter->use_queue_event_buf &&
> +	     queue_conf->event_buf_size != 0)) {
> +		RTE_EDEV_LOG_ERR("Invalid Event buffer size for the queue");
> +		return -EINVAL;
> +	}
> +

Another error case is configuring both: rxa_params->use_queue_event_buf = true and rxa_params->event_buf_size != 0, in rte_event_eth_rx_adapter_create_with_params.

>  	dev_info = &rx_adapter->eth_devices[eth_dev_id];
>=20
>  	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
> --
> 2.25.1