From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
From: "Jayatheerthan, Jay" <jay.jayatheerthan@intel.com>
To: "Naga Harish K, S V" <s.v.naga.harish.k@intel.com>, "jerinj@marvell.com"
 <jerinj@marvell.com>
CC: "dev@dpdk.org" <dev@dpdk.org>
Thread-Topic: [PATCH v2 1/3] eventdev/eth_rx: add queue stats get and reset
 APIs
Thread-Index: AQHXy8pqnLBAY/lKIUSPOePNOFiCY6voDHjA
Date: Thu, 28 Oct 2021 08:10:49 +0000
Message-ID: <DM6PR11MB4348607CA844B4A6024D84FEFD869@DM6PR11MB4348.namprd11.prod.outlook.com>
References: <20211028045430.2989816-1-s.v.naga.harish.k@intel.com>
 <20211028070640.3051397-1-s.v.naga.harish.k@intel.com>
In-Reply-To: <20211028070640.3051397-1-s.v.naga.harish.k@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v2 1/3] eventdev/eth_rx: add queue stats get
 and reset APIs
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>

> -----Original Message-----
> From: Naga Harish K, S V <s.v.naga.harish.k@intel.com>
> Sent: Thursday, October 28, 2021 12:37 PM
> To: jerinj@marvell.com; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v2 1/3] eventdev/eth_rx: add queue stats get and reset APIs
>
> This patch adds a new API ``rte_event_eth_rx_adapter_queue_stats_get`` to
> retrieve queue stats. The queue stats are in the format
> ``struct rte_event_eth_rx_adapter_queue_stats``.
>
> For resetting the queue stats, the
> ``rte_event_eth_rx_adapter_queue_stats_reset`` API is added.
>
> The adapter stats_get and stats_reset APIs are also updated to
> handle the queue level event buffer use case.
>
> Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
> ---
> v2:
> * added pmd callback support for adapter queue_stats_get and
>   queue_stats_reset apis.
> ---
>  .../prog_guide/event_ethernet_rx_adapter.rst  |  11 +
>  lib/eventdev/eventdev_pmd.h                   |  52 ++++
>  lib/eventdev/rte_event_eth_rx_adapter.c       | 268 +++++++++++++++---
>  lib/eventdev/rte_event_eth_rx_adapter.h       |  66 +++++
>  lib/eventdev/version.map                      |   2 +
>  5 files changed, 356 insertions(+), 43 deletions(-)
>
> diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> index 8b58130fc5..67b11e1563 100644
> --- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> +++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> @@ -166,6 +166,17 @@ flags for handling received packets, event queue identifier, scheduler type,
>  event priority, polling frequency of the receive queue and flow identifier
>  in struct ``rte_event_eth_rx_adapter_queue_conf``.
>
> +Getting and resetting Adapter queue stats
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The ``rte_event_eth_rx_adapter_queue_stats_get()`` function reports
> +adapter queue counters defined in struct ``rte_event_eth_rx_adapter_queue_stats``.
> +This function reports queue level stats only when the queue level event buffer is
> +used; otherwise it returns -EINVAL.
> +
> +The ``rte_event_eth_rx_adapter_queue_stats_reset`` function can be used to
> +reset queue level stats when the queue level event buffer is in use.
> +
>  Interrupt Based Rx Queues
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index d009e24309..3ba49d1fd4 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -749,6 +749,53 @@ typedef int (*eventdev_eth_rx_adapter_stats_get)
>  typedef int (*eventdev_eth_rx_adapter_stats_reset)
>  			(const struct rte_eventdev *dev,
>  			const struct rte_eth_dev *eth_dev);
> +
> +struct rte_event_eth_rx_adapter_queue_stats;
> +
> +/**
> + * Retrieve ethernet Rx adapter queue statistics.
> + *
> + * @param dev
> + *   Event device pointer
> + *
> + * @param eth_dev
> + *   Ethernet device pointer
> + *
> + * @param rx_queue_id
> + *  Ethernet device receive queue index.
> + *
> + * @param[out] q_stats
> + *   Pointer to queue stats structure
> + *
> + * @return
> + *   Return 0 on success.
> + */
> +typedef int (*eventdev_eth_rx_adapter_q_stats_get)
> +			(const struct rte_eventdev *dev,
> +			 const struct rte_eth_dev *eth_dev,
> +			 uint16_t rx_queue_id,
> +			 struct rte_event_eth_rx_adapter_queue_stats *q_stats);
> +
> +/**
> + * Reset ethernet Rx adapter queue statistics.
> + *
> + * @param dev
> + *   Event device pointer
> + *
> + * @param eth_dev
> + *   Ethernet device pointer
> + *
> + * @param rx_queue_id
> + *  Ethernet device receive queue index.
> + *
> + * @return
> + *   Return 0 on success.
> + */
> +typedef int (*eventdev_eth_rx_adapter_q_stats_reset)
> +			(const struct rte_eventdev *dev,
> +			 const struct rte_eth_dev *eth_dev,
> +			 uint16_t rx_queue_id);
> +
>  /**
>   * Start eventdev selftest.
>   *
> @@ -1224,6 +1271,11 @@ struct eventdev_ops {
>  	eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset;
>  	/**< Reset crypto stats */
>
> +	eventdev_eth_rx_adapter_q_stats_get eth_rx_adapter_queue_stats_get;
> +	/**< Get ethernet Rx queue stats */
> +	eventdev_eth_rx_adapter_q_stats_reset eth_rx_adapter_queue_stats_reset;
> +	/**< Reset ethernet Rx queue stats */
> +
>  	eventdev_eth_tx_adapter_caps_get_t eth_tx_adapter_caps_get;
>  	/**< Get ethernet Tx adapter capabilities */
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index a175c61551..31bbceb6c8 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -245,6 +245,10 @@ struct eth_rx_queue_info {
>  	uint64_t event;
>  	struct eth_rx_vector_data vector_data;
>  	struct eth_event_enqueue_buffer *event_buf;
> +	/* use adapter stats struct for queue level stats,
> +	 * as same stats need to be updated for adapter and queue
> +	 */
> +	struct rte_event_eth_rx_adapter_stats *stats;
>  };
>
>  static struct event_eth_rx_adapter **event_eth_rx_adapter;
> @@ -268,14 +272,18 @@ rxa_validate_id(uint8_t id)
>
>  static inline struct eth_event_enqueue_buffer *
>  rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
> -		  uint16_t rx_queue_id)
> +		  uint16_t rx_queue_id,
> +		  struct rte_event_eth_rx_adapter_stats **stats)
>  {
>  	if (rx_adapter->use_queue_event_buf) {
>  		struct eth_device_info *dev_info =
>  			&rx_adapter->eth_devices[eth_dev_id];
> +		*stats = dev_info->rx_queue[rx_queue_id].stats;
>  		return dev_info->rx_queue[rx_queue_id].event_buf;
> -	} else
> +	} else {
> +		*stats = &rx_adapter->stats;
>  		return &rx_adapter->event_enqueue_buffer;
> +	}
>  }
>
>  #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \
> @@ -766,9 +774,9 @@ rxa_enq_block_end_ts(struct event_eth_rx_adapter *rx_adapter,
>  /* Enqueue buffered events to event device */
>  static inline uint16_t
>  rxa_flush_event_buffer(struct event_eth_rx_adapter *rx_adapter,
> -		       struct eth_event_enqueue_buffer *buf)
> +		       struct eth_event_enqueue_buffer *buf,
> +		       struct rte_event_eth_rx_adapter_stats *stats)
>  {
> -	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>  	uint16_t count = buf->last ? buf->last - buf->head : buf->count;
>
>  	if (!count)
> @@ -883,7 +891,8 @@ rxa_create_event_vector(struct event_eth_rx_adapter *rx_adapter,
>  static inline void
>  rxa_buffer_mbufs(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
>  		 uint16_t rx_queue_id, struct rte_mbuf **mbufs, uint16_t num,
> -		 struct eth_event_enqueue_buffer *buf)
> +		 struct eth_event_enqueue_buffer *buf,
> +		 struct rte_event_eth_rx_adapter_stats *stats)
>  {
>  	uint32_t i;
>  	struct eth_device_info *dev_info =
> @@ -954,7 +963,7 @@ rxa_buffer_mbufs(struct event_eth_rx_adapter, uint16_t eth_dev_id,
>  		else
>  			num = nb_cb;
>  		if (dropped)
> -			rx_adapter->stats.rx_dropped += dropped;
> +			stats->rx_dropped += dropped;
>  	}
>
>  	buf->count += num;
> @@ -985,11 +994,10 @@ rxa_pkt_buf_available(struct eth_event_enqueue_buffer *buf)
>  static inline uint32_t
>  rxa_eth_rx(struct event_eth_rx_adapter *rx_adapter, uint16_t port_id,
>  	   uint16_t queue_id, uint32_t rx_count, uint32_t max_rx,
> -	   int *rxq_empty, struct eth_event_enqueue_buffer *buf)
> +	   int *rxq_empty, struct eth_event_enqueue_buffer *buf,
> +	   struct rte_event_eth_rx_adapter_stats *stats)
>  {
>  	struct rte_mbuf *mbufs[BATCH_SIZE];
> -	struct rte_event_eth_rx_adapter_stats *stats =
> -					&rx_adapter->stats;
>  	uint16_t n;
>  	uint32_t nb_rx = 0;
>
> @@ -1000,7 +1008,7 @@ rxa_eth_rx(struct event_eth_rx_adapter, uint16_t port_id,
>  	 */
>  	while (rxa_pkt_buf_available(buf)) {
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter, buf);
> +			rxa_flush_event_buffer(rx_adapter, buf, stats);
>
>  		stats->rx_poll_count++;
>  		n = rte_eth_rx_burst(port_id, queue_id, mbufs, BATCH_SIZE);
> @@ -1009,14 +1017,17 @@ rxa_eth_rx(struct event_eth_rx_adapter, uint16_t port_id,
>  				*rxq_empty = 1;
>  			break;
>  		}
> -		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf);
> +		rxa_buffer_mbufs(rx_adapter, port_id, queue_id, mbufs, n, buf,
> +				 stats);
>  		nb_rx += n;
>  		if (rx_count + nb_rx > max_rx)
>  			break;
>  	}
>
>  	if (buf->count > 0)
> -		rxa_flush_event_buffer(rx_adapter, buf);
> +		rxa_flush_event_buffer(rx_adapter, buf, stats);
> +
> +	stats->rx_packets += nb_rx;
>
>  	return nb_rx;
>  }
> @@ -1135,28 +1146,30 @@ rxa_intr_thread(void *arg)
>  /* Dequeue <port, q> from interrupt ring and enqueue received
>   * mbufs to eventdev
>   */
> -static inline uint32_t
> +static inline void
>  rxa_intr_ring_dequeue(struct event_eth_rx_adapter *rx_adapter)
>  {
>  	uint32_t n;
>  	uint32_t nb_rx = 0;
>  	int rxq_empty;
>  	struct eth_event_enqueue_buffer *buf;
> +	struct rte_event_eth_rx_adapter_stats *stats;
>  	rte_spinlock_t *ring_lock;
>  	uint8_t max_done = 0;
>
>  	if (rx_adapter->num_rx_intr == 0)
> -		return 0;
> +		return;
>
>  	if (rte_ring_count(rx_adapter->intr_ring) == 0
>  		&& !rx_adapter->qd_valid)
> -		return 0;
> +		return;
>
>  	buf = &rx_adapter->event_enqueue_buffer;
> +	stats = &rx_adapter->stats;
>  	ring_lock = &rx_adapter->intr_ring_lock;
>
>  	if (buf->count >= BATCH_SIZE)
> -		rxa_flush_event_buffer(rx_adapter, buf);
> +		rxa_flush_event_buffer(rx_adapter, buf, stats);
>
>  	while (rxa_pkt_buf_available(buf)) {
>  		struct eth_device_info *dev_info;
> @@ -1208,7 +1221,7 @@ rxa_intr_ring_dequeue(struct event_eth_rx_adapter *rx_adapter)
>  					continue;
>  				n = rxa_eth_rx(rx_adapter, port, i, nb_rx,
>  					rx_adapter->max_nb_rx,
> -					&rxq_empty, buf);
> +					&rxq_empty, buf, stats);
>  				nb_rx += n;
>
>  				enq_buffer_full = !rxq_empty && n == 0;
> @@ -1229,7 +1242,7 @@ rxa_intr_ring_dequeue(struct event_eth_rx_adapter *rx_adapter)
>  		} else {
>  			n = rxa_eth_rx(rx_adapter, port, queue, nb_rx,
>  				rx_adapter->max_nb_rx,
> -				&rxq_empty, buf);
> +				&rxq_empty, buf, stats);
>  			rx_adapter->qd_valid = !rxq_empty;
>  			nb_rx += n;
>  			if (nb_rx > rx_adapter->max_nb_rx)
> @@ -1239,7 +1252,6 @@ rxa_intr_ring_dequeue(struct event_eth_rx_adapter *rx_adapter)
>
>  done:
>  	rx_adapter->stats.rx_intr_packets += nb_rx;
> -	return nb_rx;
>  }
>
>  /*
> @@ -1255,12 +1267,13 @@ rxa_intr_ring_dequeue(struct event_eth_rx_adapter *rx_adapter)
>   * the hypervisor's switching layer where adjustments can be made to deal with
>   * it.
>   */
> -static inline uint32_t
> +static inline void
>  rxa_poll(struct event_eth_rx_adapter *rx_adapter)
>  {
>  	uint32_t num_queue;
>  	uint32_t nb_rx = 0;
>  	struct eth_event_enqueue_buffer *buf = NULL;
> +	struct rte_event_eth_rx_adapter_stats *stats = NULL;
>  	uint32_t wrr_pos;
>  	uint32_t max_nb_rx;
>
> @@ -1273,24 +1286,24 @@ rxa_poll(struct event_eth_rx_adapter *rx_adapter)
>  		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
>  		uint16_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
>
> -		buf = rxa_event_buf_get(rx_adapter, d, qid);
> +		buf = rxa_event_buf_get(rx_adapter, d, qid, &stats);
>
>  		/* Don't do a batch dequeue from the rx queue if there isn't
>  		 * enough space in the enqueue buffer.
>  		 */
>  		if (buf->count >= BATCH_SIZE)
> -			rxa_flush_event_buffer(rx_adapter, buf);
> +			rxa_flush_event_buffer(rx_adapter, buf, stats);
>  		if (!rxa_pkt_buf_available(buf)) {
>  			if (rx_adapter->use_queue_event_buf)
>  				goto poll_next_entry;
>  			else {
>  				rx_adapter->wrr_pos = wrr_pos;
> -				return nb_rx;
> +				return;
>  			}
>  		}
>
>  		nb_rx += rxa_eth_rx(rx_adapter, d, qid, nb_rx, max_nb_rx,
> -				NULL, buf);
> +				NULL, buf, stats);
>  		if (nb_rx > max_nb_rx) {
>  			rx_adapter->wrr_pos =
>  				    (wrr_pos + 1) % rx_adapter->wrr_len;
> @@ -1301,7 +1314,6 @@ rxa_poll(struct event_eth_rx_adapter *rx_adapter)
>  		if (++wrr_pos == rx_adapter->wrr_len)
>  			wrr_pos = 0;
>  	}
> -	return nb_rx;
>  }
>
>  static void
> @@ -1309,12 +1321,13 @@ rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
>  {
>  	struct event_eth_rx_adapter *rx_adapter = arg;
>  	struct eth_event_enqueue_buffer *buf = NULL;
> +	struct rte_event_eth_rx_adapter_stats *stats = NULL;
>  	struct rte_event *ev;
>
> -	buf = rxa_event_buf_get(rx_adapter, vec->port, vec->queue);
> +	buf = rxa_event_buf_get(rx_adapter, vec->port, vec->queue, &stats);
>
>  	if (buf->count)
> -		rxa_flush_event_buffer(rx_adapter, buf);
> +		rxa_flush_event_buffer(rx_adapter, buf, stats);
>
>  	if (vec->vector_ev->nb_elem == 0)
>  		return;
> @@ -1333,7 +1346,6 @@ static int
>  rxa_service_func(void *args)
>  {
>  	struct event_eth_rx_adapter *rx_adapter = args;
> -	struct rte_event_eth_rx_adapter_stats *stats;
>
>  	if (rte_spinlock_trylock(&rx_adapter->rx_lock) == 0)
>  		return 0;
> @@ -1360,10 +1372,11 @@ rxa_service_func(void *args)
>  		}
>  	}
>
> -	stats = &rx_adapter->stats;
> -	stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
> -	stats->rx_packets += rxa_poll(rx_adapter);
> +	rxa_intr_ring_dequeue(rx_adapter);
> +	rxa_poll(rx_adapter);
> +
>  	rte_spinlock_unlock(&rx_adapter->rx_lock);
> +
>  	return 0;
>  }
>
> @@ -1937,9 +1950,13 @@ rxa_sw_del(struct event_eth_rx_adapter *rx_adapter,
>  	if (rx_adapter->use_queue_event_buf) {
>  		struct eth_event_enqueue_buffer *event_buf =
>  			dev_info->rx_queue[rx_queue_id].event_buf;
> +		struct rte_event_eth_rx_adapter_stats *stats =
> +			dev_info->rx_queue[rx_queue_id].stats;
>  		rte_free(event_buf->events);
>  		rte_free(event_buf);
> +		rte_free(stats);
>  		dev_info->rx_queue[rx_queue_id].event_buf = NULL;
> +		dev_info->rx_queue[rx_queue_id].stats = NULL;
>  	}
>  }
>
> @@ -1955,6 +1972,7 @@ rxa_add_queue(struct event_eth_rx_adapter,
>  	int sintrq;
>  	struct rte_event *qi_ev;
>  	struct eth_event_enqueue_buffer *new_rx_buf = NULL;
> +	struct rte_event_eth_rx_adapter_stats *stats = NULL;
>  	uint16_t eth_dev_id = dev_info->dev->data->port_id;
>  	int ret;
>
> @@ -2061,6 +2079,21 @@ rxa_add_queue(struct event_eth_rx_adapter,
>
>  	queue_info->event_buf = new_rx_buf;
>
> +	/* Allocate storage for adapter queue stats */
> +	stats = rte_zmalloc_socket("rx_queue_stats",
> +				sizeof(*stats), 0,
> +				rte_eth_dev_socket_id(eth_dev_id));
> +	if (stats == NULL) {
> +		rte_free(new_rx_buf->events);
> +		rte_free(new_rx_buf);
> +		RTE_EDEV_LOG_ERR("Failed to allocate stats storage for"
> +				 " dev_id: %d queue_id: %d",
> +				 eth_dev_id, rx_queue_id);
> +		return -ENOMEM;
> +	}
> +
> +	queue_info->stats = stats;
> +
>  	return 0;
>  }
>
> @@ -2819,6 +2852,15 @@ rte_event_eth_rx_adapter_stop(uint8_t id)
>  	return rxa_ctrl(id, 0);
>  }
>
> +static inline void
> +rxa_queue_stats_reset(struct eth_rx_queue_info *queue_info)
> +{
> +	struct rte_event_eth_rx_adapter_stats *q_stats;
> +
> +	q_stats = queue_info->stats;
> +	memset(q_stats, 0, sizeof(*q_stats));
> +}
> +
>  int
>  rte_event_eth_rx_adapter_stats_get(uint8_t id,
>  			       struct rte_event_eth_rx_adapter_stats *stats)
> @@ -2829,7 +2871,9 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
>  	struct rte_event_eth_rx_adapter_stats dev_stats;
>  	struct rte_eventdev *dev;
>  	struct eth_device_info *dev_info;
> -	uint32_t i;
> +	struct eth_rx_queue_info *queue_info;
> +	struct rte_event_eth_rx_adapter_stats *q_stats;
> +	uint32_t i, j;
>  	int ret;
>
>  	if (rxa_memzone_lookup())
> @@ -2843,8 +2887,32 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
>
>  	dev = &rte_eventdevs[rx_adapter->eventdev_id];
>  	memset(stats, 0, sizeof(*stats));
> +
> +	if (rx_adapter->service_inited)
> +		*stats = rx_adapter->stats;
> +
>  	RTE_ETH_FOREACH_DEV(i) {
>  		dev_info = &rx_adapter->eth_devices[i];
> +
> +		if (rx_adapter->use_queue_event_buf  && dev_info->rx_queue) {

nitpick: extra space between use_queue_event_buf and &&.

> +
> +			for (j = 0; j < dev_info->dev->data->nb_rx_queues;
> +						j++) {

nitpick: align this line to "j = 0"

Rest of the patch set looks good to me.

With these changes, you can add my ack.
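
For reference, here is a minimal usage sketch of the new APIs as I read them
from this patch. It is untested; the adapter/port/queue identifiers
(RX_ADAPTER_ID, ETH_PORT_ID, RX_QUEUE_ID) are placeholder values, and the
adapter is assumed to have been configured with per-queue event buffers,
since both calls return -EINVAL otherwise:

#include <stdio.h>
#include <inttypes.h>
#include <rte_event_eth_rx_adapter.h>

/* Placeholder identifiers, for illustration only. */
#define RX_ADAPTER_ID 0
#define ETH_PORT_ID   0
#define RX_QUEUE_ID   0

static void
dump_and_reset_queue_stats(void)
{
	struct rte_event_eth_rx_adapter_queue_stats q_stats;
	int ret;

	/* Returns -EINVAL unless the queue level event buffer is in use. */
	ret = rte_event_eth_rx_adapter_queue_stats_get(RX_ADAPTER_ID,
						       ETH_PORT_ID,
						       RX_QUEUE_ID,
						       &q_stats);
	if (ret < 0) {
		printf("queue stats get failed: %d\n", ret);
		return;
	}

	printf("rx_packets=%" PRIu64 " rx_poll_count=%" PRIu64
	       " rx_dropped=%" PRIu64 " buf_count=%" PRIu64 "\n",
	       q_stats.rx_packets, q_stats.rx_poll_count,
	       q_stats.rx_dropped, q_stats.rx_event_buf_count);

	/* Clear the per-queue counters after reading them. */
	ret = rte_event_eth_rx_adapter_queue_stats_reset(RX_ADAPTER_ID,
							 ETH_PORT_ID,
							 RX_QUEUE_ID);
	if (ret < 0)
		printf("queue stats reset failed: %d\n", ret);
}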

> +				queue_info = &dev_info->rx_queue[j];
> +				if (!queue_info->queue_enabled)
> +					continue;
> +				q_stats = queue_info->stats;
> +
> +				stats->rx_packets += q_stats->rx_packets;
> +				stats->rx_poll_count += q_stats->rx_poll_count;
> +				stats->rx_enq_count += q_stats->rx_enq_count;
> +				stats->rx_enq_retry += q_stats->rx_enq_retry;
> +				stats->rx_dropped += q_stats->rx_dropped;
> +				stats->rx_enq_block_cycles +=
> +						q_stats->rx_enq_block_cycles;
> +			}
> +		}
> +
>  		if (dev_info->internal_event_port == 0 ||
>  			dev->dev_ops->eth_rx_adapter_stats_get == NULL)
>  			continue;
> @@ -2857,19 +2925,69 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
>  		dev_stats_sum.rx_enq_count += dev_stats.rx_enq_count;
>  	}
>
> -	if (rx_adapter->service_inited)
> -		*stats = rx_adapter->stats;
> -
> +	buf = &rx_adapter->event_enqueue_buffer;
>  	stats->rx_packets += dev_stats_sum.rx_packets;
>  	stats->rx_enq_count += dev_stats_sum.rx_enq_count;
> +	stats->rx_event_buf_count = buf->count;
> +	stats->rx_event_buf_size = buf->events_size;
>
> -	if (!rx_adapter->use_queue_event_buf) {
> -		buf = &rx_adapter->event_enqueue_buffer;
> -		stats->rx_event_buf_count = buf->count;
> -		stats->rx_event_buf_size = buf->events_size;
> -	} else {
> -		stats->rx_event_buf_count = 0;
> -		stats->rx_event_buf_size = 0;
> +	return 0;
> +}
> +
> +int
> +rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
> +		uint16_t eth_dev_id,
> +		uint16_t rx_queue_id,
> +		struct rte_event_eth_rx_adapter_queue_stats *stats)
> +{
> +	struct event_eth_rx_adapter *rx_adapter;
> +	struct eth_device_info *dev_info;
> +	struct eth_rx_queue_info *queue_info;
> +	struct eth_event_enqueue_buffer *event_buf;
> +	struct rte_event_eth_rx_adapter_stats *q_stats;
> +	struct rte_eventdev *dev;
> +
> +	if (rxa_memzone_lookup())
> +		return -ENOMEM;
> +
> +	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> +
> +	rx_adapter = rxa_id_to_adapter(id);
> +
> +	if (rx_adapter == NULL || stats == NULL)
> +		return -EINVAL;
> +
> +	if (!rx_adapter->use_queue_event_buf)
> +		return -EINVAL;
> +
> +	if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
> +		RTE_EDEV_LOG_ERR("Invalid rx queue_id %" PRIu16, rx_queue_id);
> +		return -EINVAL;
> +	}
> +
> +	dev_info = &rx_adapter->eth_devices[eth_dev_id];
> +	if (dev_info->rx_queue == NULL ||
> +	    !dev_info->rx_queue[rx_queue_id].queue_enabled) {
> +		RTE_EDEV_LOG_ERR("Rx queue %u not added", rx_queue_id);
> +		return -EINVAL;
> +	}
> +
> +	queue_info = &dev_info->rx_queue[rx_queue_id];
> +	event_buf = queue_info->event_buf;
> +	q_stats = queue_info->stats;
> +
> +	stats->rx_event_buf_count = event_buf->count;
> +	stats->rx_event_buf_size = event_buf->events_size;
> +	stats->rx_packets = q_stats->rx_packets;
> +	stats->rx_poll_count = q_stats->rx_poll_count;
> +	stats->rx_dropped = q_stats->rx_dropped;
> +
> +	dev = &rte_eventdevs[rx_adapter->eventdev_id];
> +	if (dev->dev_ops->eth_rx_adapter_queue_stats_get != NULL) {
> +		return (*dev->dev_ops->eth_rx_adapter_queue_stats_get)(dev,
> +						&rte_eth_devices[eth_dev_id],
> +						rx_queue_id, stats);
>  	}
>
>  	return 0;
> @@ -2881,7 +2999,8 @@ rte_event_eth_rx_adapter_stats_reset(uint8_t id)
>  	struct event_eth_rx_adapter *rx_adapter;
>  	struct rte_eventdev *dev;
>  	struct eth_device_info *dev_info;
> -	uint32_t i;
> +	struct eth_rx_queue_info *queue_info;
> +	uint32_t i, j;
>
>  	if (rxa_memzone_lookup())
>  		return -ENOMEM;
> @@ -2893,8 +3012,21 @@ rte_event_eth_rx_adapter_stats_reset(uint8_t id)
>  		return -EINVAL;
>
>  	dev = &rte_eventdevs[rx_adapter->eventdev_id];
> +
>  	RTE_ETH_FOREACH_DEV(i) {
>  		dev_info = &rx_adapter->eth_devices[i];
> +
> +		if (rx_adapter->use_queue_event_buf  && dev_info->rx_queue) {
> +
> +			for (j = 0; j < dev_info->dev->data->nb_rx_queues;
> +						j++) {
> +				queue_info = &dev_info->rx_queue[j];
> +				if (!queue_info->queue_enabled)
> +					continue;
> +				rxa_queue_stats_reset(queue_info);
> +			}
> +		}
> +
>  		if (dev_info->internal_event_port == 0 ||
>  			dev->dev_ops->eth_rx_adapter_stats_reset == NULL)
>  			continue;
> @@ -2903,6 +3035,56 @@ rte_event_eth_rx_adapter_stats_reset(uint8_t id)
>  	}
>
>  	memset(&rx_adapter->stats, 0, sizeof(rx_adapter->stats));
> +
> +	return 0;
> +}
> +
> +int
> +rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
> +		uint16_t eth_dev_id,
> +		uint16_t rx_queue_id)
> +{
> +	struct event_eth_rx_adapter *rx_adapter;
> +	struct eth_device_info *dev_info;
> +	struct eth_rx_queue_info *queue_info;
> +	struct rte_eventdev *dev;
> +
> +	if (rxa_memzone_lookup())
> +		return -ENOMEM;
> +
> +	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> +
> +	rx_adapter = rxa_id_to_adapter(id);
> +	if (rx_adapter == NULL)
> +		return -EINVAL;
> +
> +	if (!rx_adapter->use_queue_event_buf)
> +		return -EINVAL;
> +
> +	if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
> +		RTE_EDEV_LOG_ERR("Invalid rx queue_id %" PRIu16, rx_queue_id);
> +		return -EINVAL;
> +	}
> +
> +	dev_info = &rx_adapter->eth_devices[eth_dev_id];
> +
> +	if (dev_info->rx_queue == NULL ||
> +	    !dev_info->rx_queue[rx_queue_id].queue_enabled) {
> +		RTE_EDEV_LOG_ERR("Rx queue %u not added", rx_queue_id);
> +		return -EINVAL;
> +	}
> +
> +	queue_info = &dev_info->rx_queue[rx_queue_id];
> +	rxa_queue_stats_reset(queue_info);
> +
> +	dev = &rte_eventdevs[rx_adapter->eventdev_id];
> +	if (dev->dev_ops->eth_rx_adapter_queue_stats_reset != NULL) {
> +		return (*dev->dev_ops->eth_rx_adapter_queue_stats_reset)(dev,
> +						&rte_eth_devices[eth_dev_id],
> +						rx_queue_id);
> +	}
> +
>  	return 0;
>  }
>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> index ab625f7273..9546d792e9 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> @@ -35,6 +35,8 @@
>   *  - rte_event_eth_rx_adapter_stats_get()
>   *  - rte_event_eth_rx_adapter_stats_reset()
>   *  - rte_event_eth_rx_adapter_queue_conf_get()
> + *  - rte_event_eth_rx_adapter_queue_stats_get()
> + *  - rte_event_eth_rx_adapter_queue_stats_reset()
>   *
>   * The application creates an ethernet to event adapter using
>   * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
> @@ -204,6 +206,23 @@ struct rte_event_eth_rx_adapter_queue_conf {
>  	/**< event buffer size for this queue */
>  };
>
> +/**
> + * A structure used to retrieve statistics for an
> + * eth rx adapter queue.
> + */
> +struct rte_event_eth_rx_adapter_queue_stats {
> +	uint64_t rx_event_buf_count;
> +	/**< Rx event buffered count */
> +	uint64_t rx_event_buf_size;
> +	/**< Rx event buffer size */
> +	uint64_t rx_poll_count;
> +	/**< Receive queue poll count */
> +	uint64_t rx_packets;
> +	/**< Received packet count */
> +	uint64_t rx_dropped;
> +	/**< Received packet dropped count */
> +};
> +
>  /**
>   * A structure used to retrieve statistics for an eth rx adapter instance.
>   */
> @@ -617,6 +636,53 @@ int rte_event_eth_rx_adapter_queue_conf_get(uint8_t id,
>  			uint16_t rx_queue_id,
>  			struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
>
> +/**
> + * Retrieve Rx queue statistics.
> + *
> + * @param id
> + *  Adapter identifier.
> + *
> + * @param eth_dev_id
> + *  Port identifier of Ethernet device.
> + *
> + * @param rx_queue_id
> + *  Ethernet device receive queue index.
> + *
> + * @param[out] stats
> + *  Pointer to struct rte_event_eth_rx_adapter_queue_stats
> + *
> + * @return
> + *  - 0: Success, queue buffer stats retrieved.
> + *  - <0: Error code on failure.
> + */
> +__rte_experimental
> +int
> +rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
> +		uint16_t eth_dev_id,
> +		uint16_t rx_queue_id,
> +		struct rte_event_eth_rx_adapter_queue_stats *stats);
> +
> +/**
> + * Reset Rx queue statistics.
> + *
> + * @param id
> + *  Adapter identifier.
> + *
> + * @param eth_dev_id
> + *  Port identifier of Ethernet device.
> + *
> + * @param rx_queue_id
> + *  Ethernet device receive queue index.
> + *
> + * @return
> + *  - 0: Success, queue stats reset.
> + *  - <0: Error code on failure.
> + */
> +__rte_experimental
> +int
> +rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
> +		uint16_t eth_dev_id,
> +		uint16_t rx_queue_id);
>
>  #ifdef __cplusplus
>  }
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index cd37164141..ade1f1182e 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -103,6 +103,8 @@ EXPERIMENTAL {
>  	# added in 21.11
>  	rte_event_eth_rx_adapter_create_with_params;
>  	rte_event_eth_rx_adapter_queue_conf_get;
> +	rte_event_eth_rx_adapter_queue_stats_get;
> +	rte_event_eth_rx_adapter_queue_stats_reset;
>  };
>
>  INTERNAL {
> --
> 2.25.1
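
On the v2 note about PMD callback support: for a driver that wants to report
its own per-queue counters, the wiring could look roughly like the sketch
below. This is only an illustration of the two new eventdev_ops entries added
by this patch; the xyz_* names are made up and no real driver is implied.

#include <string.h>
#include <rte_common.h>
#include <rte_event_eth_rx_adapter.h>
#include "eventdev_pmd.h" /* driver-internal header, as extended by this patch */

/* Hypothetical driver callback: fill q_stats from driver-private counters. */
static int
xyz_rx_adapter_queue_stats_get(const struct rte_eventdev *dev,
			       const struct rte_eth_dev *eth_dev,
			       uint16_t rx_queue_id,
			       struct rte_event_eth_rx_adapter_queue_stats *q_stats)
{
	RTE_SET_USED(dev);
	RTE_SET_USED(eth_dev);
	RTE_SET_USED(rx_queue_id);
	/* A real driver would copy its own per-queue counters here. */
	memset(q_stats, 0, sizeof(*q_stats));
	return 0;
}

/* Hypothetical driver callback: zero the driver-private per-queue counters. */
static int
xyz_rx_adapter_queue_stats_reset(const struct rte_eventdev *dev,
				 const struct rte_eth_dev *eth_dev,
				 uint16_t rx_queue_id)
{
	RTE_SET_USED(dev);
	RTE_SET_USED(eth_dev);
	RTE_SET_USED(rx_queue_id);
	return 0;
}

static struct eventdev_ops xyz_eventdev_ops = {
	/* other ops elided for brevity */
	.eth_rx_adapter_queue_stats_get = xyz_rx_adapter_queue_stats_get,
	.eth_rx_adapter_queue_stats_reset = xyz_rx_adapter_queue_stats_reset,
};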