From: "Chen, Mike Ximing"
To: "McDaniel, Timothy"
CC: "dev@dpdk.org", "Carrillo, Erik G", "Eads, Gage", "Van Haaren, Harry", "jerinj@marvell.com"
Date: Thu, 17 Sep 2020 20:58:20 +0000
References: <1599855987-25976-1-git-send-email-timothy.mcdaniel@intel.com> <1599855987-25976-8-git-send-email-timothy.mcdaniel@intel.com>
In-Reply-To: <1599855987-25976-8-git-send-email-timothy.mcdaniel@intel.com>
Subject: Re: [dpdk-dev] [PATCH 07/22] event/dlb2: add xstats
List-Id: DPDK patches and discussions

> +dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f)
> +{
> +	struct dlb2_eventdev *dlb2;
> +	struct dlb2_hw_dev *handle;
> +	int i;
> +
> +	if (!f) {
> +		printf("Invalid file pointer\n");
> +		return;
> +	}
> +
> +	if (!dev) {
> +		fprintf(f, "Invalid event device\n");
> +		return;
> +	}
> +
> +	dlb2 = dlb2_pmd_priv(dev);
> +
> +	if (!dlb2) {
> +		fprintf(f, "DLB2 Event device cannot be dumped!\n");
> +		return;
> +	}
> +

Not sure if this is enforced. The DPDK coding style discourages using ! on pointers (see section 1.8.1 at https://doc.dpdk.org/guides/contributing/coding_style.html).

Reviewed-by: Mike Ximing Chen

> -----Original Message-----
> From: dev On Behalf Of Timothy McDaniel
> Sent: Friday, September 11, 2020 4:26 PM
> Cc: dev@dpdk.org; Carrillo, Erik G; Eads, Gage; Van Haaren, Harry; jerinj@marvell.com
> Subject: [dpdk-dev] [PATCH 07/22] event/dlb2: add xstats
>
> Add support for DLB2 xstats. Perform initialization and add standard
> xstats entry points.
>
> Signed-off-by: Timothy McDaniel
> ---
>  drivers/event/dlb2/dlb2.c        |   35 +-
>  drivers/event/dlb2/dlb2_xstats.c | 1269 ++++++++++++++++++++++++++++++++++++++
>  drivers/event/dlb2/meson.build   |    1 +
>  3 files changed, 1302 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/event/dlb2/dlb2_xstats.c
>
> diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> index 7ff7dac..0d6fea4 100644
> --- a/drivers/event/dlb2/dlb2.c
> +++ b/drivers/event/dlb2/dlb2.c
> @@ -77,6 +77,21 @@ static struct dlb2_port_low_level_io_functions qm_mmio_fns;
>  struct process_local_port_data
>  dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
>
> +/*
> + * DUMMY - added so that xstats path will compile/link.
> + * Will be replaced by real version in a subsequent
> + * patch.
> + */
> +uint32_t
> +dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
> +		     struct dlb2_eventdev_queue *queue)
> +{
> +	RTE_SET_USED(dlb2);
> +	RTE_SET_USED(queue);
> +
> +	return 0;
> +}
> +
>  /* override defaults with value(s) provided on command line */
>  static void
>  dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
> @@ -353,9 +368,16 @@ set_qid_depth_thresh(const char *key __rte_unused,
>  static void
>  dlb2_entry_points_init(struct rte_eventdev *dev)
>  {
> -	RTE_SET_USED(dev);
> -
> -	/* Eventdev PMD entry points */
> +	/* Expose PMD's eventdev interface */
> +	static struct rte_eventdev_ops dlb2_eventdev_entry_ops = {
> +		.dump = dlb2_eventdev_dump,
> +		.xstats_get = dlb2_eventdev_xstats_get,
> +		.xstats_get_names = dlb2_eventdev_xstats_get_names,
> +		.xstats_get_by_name = dlb2_eventdev_xstats_get_by_name,
> +		.xstats_reset = dlb2_eventdev_xstats_reset,
> +	};
> +
> +	dev->dev_ops = &dlb2_eventdev_entry_ops;
>  }
>
>  int
> @@ -411,6 +433,13 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
>  		return err;
>  	}
>
> +	/* Complete xtstats runtime initialization */
> +	err = dlb2_xstats_init(dlb2);
> +	if (err) {
> +		DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d\n", err);
> +		return err;
> +	}
> +
>  	/* Initialize each port's token pop mode */
>  	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++)
>  		dlb2->ev_ports[i].qm_port.token_pop_mode = AUTO_POP;
> diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
> new file mode 100644
> index 0000000..9a69d78
> --- /dev/null
> +++ b/drivers/event/dlb2/dlb2_xstats.c
> @@ -0,0 +1,1269 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "dlb2_priv.h"
> +#include "dlb2_inline_fns.h"
> +
> +enum dlb2_xstats_type {
> +	/* common to device and port */
> +	rx_ok,				/**< Receive an event */
> +	rx_drop,			/**< Error bit set in received QE */
> +	rx_interrupt_wait,		/**< Wait on an interrupt */
> +	rx_umonitor_umwait,		/**< Block using umwait */
> +	tx_ok,				/**< Transmit an event */
> +	total_polls,			/**< Call dequeue_burst */
> +	zero_polls,			/**< Call dequeue burst and return 0 */
> +	tx_nospc_ldb_hw_credits,	/**< Insufficient LDB h/w credits */
> +	tx_nospc_dir_hw_credits,	/**< Insufficient DIR h/w credits */
> +	tx_nospc_inflight_max,		/**< Reach the new_event_threshold */
> +	tx_nospc_new_event_limit,	/**< Insufficient s/w credits */
> +	tx_nospc_inflight_credits,	/**< Port has too few s/w credits */
> +	/* device specific */
> +	nb_events_limit,
> +	inflight_events,
> +	ldb_pool_size,
> +	dir_pool_size,
> +	/* port specific */
> +	tx_new,				/**< Send an OP_NEW event */
> +	tx_fwd,				/**< Send an OP_FORWARD event */
> +	tx_rel,				/**< Send an OP_RELEASE event */
> +	tx_implicit_rel,		/**< Issue an implicit event release */
> +	tx_sched_ordered,		/**< Send a SCHED_TYPE_ORDERED event */
> +	tx_sched_unordered,		/**< Send a SCHED_TYPE_PARALLEL event */
> +	tx_sched_atomic,		/**< Send a SCHED_TYPE_ATOMIC event */
> +	tx_sched_directed,		/**< Send a directed event */
> +	tx_invalid,			/**< Send an event with an invalid op */
> +	outstanding_releases,		/**< # of releases a port owes */
> +	max_outstanding_releases,	/**< max # of releases a port can owe */
> +	rx_sched_ordered,		/**< Dequeue an ordered event */
> +	rx_sched_unordered,		/**< Dequeue an unordered event */
> +	rx_sched_atomic,		/**< Dequeue an atomic event */
> +	rx_sched_directed,		/**< Dequeue an directed event */
> +	rx_sched_invalid,		/**< Dequeue event sched type invalid */
> +	/* common to port and queue */
> +	is_configured,			/**< Port is configured */
> +	is_load_balanced,		/**< Port is LDB */
> +	hw_id,				/**< Hardware ID */
> +	/* queue specific */
> +	num_links,			/**< Number of ports linked */
> +	sched_type,			/**< Queue sched type */
> +	enq_ok,				/**< # events enqueued to the queue */
> +	current_depth,			/**< Current queue depth */
> +	depth_threshold,		/**< Programmed depth threshold */
> +	depth_le50_threshold,
> +	/**< Depth LE to 50% of the configured hardware threshold */
> +	depth_gt50_le75_threshold,
> +	/**< Depth GT 50%, but LE to 75% of the configured hardware threshold */
> +	depth_gt75_le100_threshold,
> +	/**< Depth GT 75%. but LE to the configured hardware threshold */
> +	depth_gt100_threshold
> +	/**< Depth GT 100% of the configured hw threshold */
> +};
> +
> +typedef uint64_t (*dlb2_xstats_fn)(struct dlb2_eventdev *dlb2,
> +		uint16_t obj_idx, /* port or queue id */
> +		enum dlb2_xstats_type stat, int extra_arg);
> +
> +enum dlb2_xstats_fn_type {
> +	DLB2_XSTATS_FN_DEV,
> +	DLB2_XSTATS_FN_PORT,
> +	DLB2_XSTATS_FN_QUEUE
> +};
> +
> +struct dlb2_xstats_entry {
> +	struct rte_event_dev_xstats_name name;
> +	uint64_t reset_value; /* an offset to be taken away to emulate resets */
> +	enum dlb2_xstats_fn_type fn_id;
> +	enum dlb2_xstats_type stat;
> +	enum rte_event_dev_xstats_mode mode;
> +	int extra_arg;
> +	uint16_t obj_idx;
> +	uint8_t reset_allowed; /* when set, this value can be reset */
> +};
> +
> +/* Some device stats are simply a summation of the corresponding port values */
> +static uint64_t
> +dlb2_device_traffic_stat_get(struct dlb2_eventdev *dlb2,
> +			     int which_stat)
> +{
> +	int i;
> +	uint64_t val = 0;
> +
> +	for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
> +		struct dlb2_eventdev_port *port = &dlb2->ev_ports[i];
> +
> +		if (!port->setup_done)
> +			continue;
> +
> +		switch (which_stat) {
> +		case rx_ok:
> +			val += port->stats.traffic.rx_ok;
> +			break;
> +		case rx_drop:
> +			val += port->stats.traffic.rx_drop;
> +			break;
> +		case rx_interrupt_wait:
> +			val += port->stats.traffic.rx_interrupt_wait;
> +			break;
> +		case rx_umonitor_umwait:
> +			val += port->stats.traffic.rx_umonitor_umwait;
> +			break;
> +		case tx_ok:
> +			val += port->stats.traffic.tx_ok;
> +			break;
> +		case total_polls:
> +			val += port->stats.traffic.total_polls;
> +			break;
> +		case zero_polls:
> +			val += port->stats.traffic.zero_polls;
> +			break;
> +		case tx_nospc_ldb_hw_credits:
> +			val += port->stats.traffic.tx_nospc_ldb_hw_credits;
> +			break;
> +		case tx_nospc_dir_hw_credits:
> +			val += port->stats.traffic.tx_nospc_dir_hw_credits;
> +			break;
> +		case tx_nospc_inflight_max:
> +			val += port->stats.traffic.tx_nospc_inflight_max;
> +			break;
> +		case tx_nospc_new_event_limit:
> +			val += port->stats.traffic.tx_nospc_new_event_limit;
> +			break;
> +		case tx_nospc_inflight_credits:
> +			val += port->stats.traffic.tx_nospc_inflight_credits;
> +			break;
> +		default:
> +			return -1;
> +		}
> +	}
> +	return val;
> +}
> +
> +static uint64_t
> +get_dev_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx __rte_unused,
> +	     enum dlb2_xstats_type type, int extra_arg __rte_unused)
> +{
> +	switch (type) {
> +	case rx_ok:
> +	case rx_drop:
> +	case rx_interrupt_wait:
> +	case rx_umonitor_umwait:
> +	case tx_ok:
> +	case total_polls:
> +	case zero_polls:
> +	case tx_nospc_ldb_hw_credits:
> +	case tx_nospc_dir_hw_credits:
> +	case tx_nospc_inflight_max:
> +	case tx_nospc_new_event_limit:
> +	case tx_nospc_inflight_credits:
> +		return dlb2_device_traffic_stat_get(dlb2, type);
> +	case nb_events_limit:
> +		return dlb2->new_event_limit;
> +	case inflight_events:
> +		return __atomic_load_n(&dlb2->inflights, __ATOMIC_SEQ_CST);
> +	case ldb_pool_size:
> +		return dlb2->num_ldb_credits;
> +	case dir_pool_size:
> +		return dlb2->num_dir_credits;
> +	default: return -1;
> +	}
> +}
> +
> +static uint64_t
> +get_port_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
> +	      enum dlb2_xstats_type type, int extra_arg __rte_unused)
> +{
> +	struct dlb2_eventdev_port *ev_port = &dlb2->ev_ports[obj_idx];
> +
> +	switch (type) {
> +	case rx_ok: return ev_port->stats.traffic.rx_ok;
> +
> +	case rx_drop: return ev_port->stats.traffic.rx_drop;
> +
> +	case rx_interrupt_wait: return ev_port->stats.traffic.rx_interrupt_wait;
> +
> +	case rx_umonitor_umwait:
> +		return ev_port->stats.traffic.rx_umonitor_umwait;
> +
> +	case tx_ok: return ev_port->stats.traffic.tx_ok;
> +
> +	case total_polls: return ev_port->stats.traffic.total_polls;
> +
> +	case zero_polls: return ev_port->stats.traffic.zero_polls;
> +
> +	case tx_nospc_ldb_hw_credits:
> +		return ev_port->stats.traffic.tx_nospc_ldb_hw_credits;
> +
> +	case tx_nospc_dir_hw_credits:
> +		return ev_port->stats.traffic.tx_nospc_dir_hw_credits;
> +
> +	case tx_nospc_inflight_max:
> +		return ev_port->stats.traffic.tx_nospc_inflight_max;
> +
> +	case tx_nospc_new_event_limit:
> +		return ev_port->stats.traffic.tx_nospc_new_event_limit;
> +
> +	case tx_nospc_inflight_credits:
> +		return ev_port->stats.traffic.tx_nospc_inflight_credits;
> +
> +	case is_configured: return ev_port->setup_done;
> +
> +	case is_load_balanced: return !ev_port->qm_port.is_directed;
> +
> +	case hw_id: return ev_port->qm_port.id;
> +
> +	case tx_new: return ev_port->stats.tx_op_cnt[RTE_EVENT_OP_NEW];
> +
> +	case tx_fwd: return ev_port->stats.tx_op_cnt[RTE_EVENT_OP_FORWARD];
> +
> +	case tx_rel: return ev_port->stats.tx_op_cnt[RTE_EVENT_OP_RELEASE];
> +
> +	case tx_implicit_rel: return ev_port->stats.tx_implicit_rel;
> +
> +	case tx_sched_ordered:
> +		return ev_port->stats.tx_sched_cnt[DLB2_SCHED_ORDERED];
> +
> +	case tx_sched_unordered:
> +		return ev_port->stats.tx_sched_cnt[DLB2_SCHED_UNORDERED];
> +
> +	case tx_sched_atomic:
> +		return ev_port->stats.tx_sched_cnt[DLB2_SCHED_ATOMIC];
> +
> +	case tx_sched_directed:
> +		return ev_port->stats.tx_sched_cnt[DLB2_SCHED_DIRECTED];
> +
> +	case tx_invalid: return ev_port->stats.tx_invalid;
> +
> +	case outstanding_releases: return ev_port->outstanding_releases;
> +
> +	case max_outstanding_releases:
> +		return DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
> +
> +	case rx_sched_ordered:
> +		return ev_port->stats.rx_sched_cnt[DLB2_SCHED_ORDERED];
> +
> +	case rx_sched_unordered:
> +		return ev_port->stats.rx_sched_cnt[DLB2_SCHED_UNORDERED];
> +
> +	case rx_sched_atomic:
> +		return ev_port->stats.rx_sched_cnt[DLB2_SCHED_ATOMIC];
> +
> +	case rx_sched_directed:
> +		return ev_port->stats.rx_sched_cnt[DLB2_SCHED_DIRECTED];
> +
> +	case rx_sched_invalid: return ev_port->stats.rx_sched_invalid;
> +
> +	default: return -1;
> +	}
> +}
> +
> +static uint64_t
> +dlb2_get_threshold_stat(struct dlb2_eventdev *dlb2, int qid, int stat)
> +{
> +	int port = 0;
> +	uint64_t tally = 0;
> +
> +	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
> +		tally += dlb2->ev_ports[port].stats.queue[qid].qid_depth[stat];
> +
> +	return tally;
> +}
> +
> +static uint64_t
> +dlb2_get_enq_ok_stat(struct dlb2_eventdev *dlb2, int qid)
> +{
> +	int port = 0;
> +	uint64_t enq_ok_tally = 0;
> +
> +	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++)
> +		enq_ok_tally += dlb2->ev_ports[port].stats.queue[qid].enq_ok;
> +
> +	return enq_ok_tally;
> +}
> +
> +static uint64_t
> +get_queue_stat(struct dlb2_eventdev *dlb2, uint16_t obj_idx,
> +	       enum dlb2_xstats_type type, int extra_arg __rte_unused)
> +{
> +	struct dlb2_eventdev_queue *ev_queue =
> +		&dlb2->ev_queues[obj_idx];
> +
> +	switch (type) {
> +	case is_configured: return ev_queue->setup_done;
> +
> +	case is_load_balanced: return !ev_queue->qm_queue.is_directed;
> +
> +	case hw_id: return ev_queue->qm_queue.id;
> +
> +	case num_links: return ev_queue->num_links;
> +
> +	case sched_type: return ev_queue->qm_queue.sched_type;
> +
> +	case enq_ok: return dlb2_get_enq_ok_stat(dlb2, obj_idx);
> +
> +	case current_depth: return dlb2_get_queue_depth(dlb2, ev_queue);
> +
> +	case depth_threshold: return ev_queue->depth_threshold;
> +
> +	case depth_le50_threshold:
> +		return dlb2_get_threshold_stat(dlb2, ev_queue->id,
> +					       DLB2_QID_DEPTH_LE50);
> +
> +	case depth_gt50_le75_threshold:
> +		return dlb2_get_threshold_stat(dlb2, ev_queue->id,
> +					       DLB2_QID_DEPTH_GT50_LE75);
> +
> +	case depth_gt75_le100_threshold:
> +		return dlb2_get_threshold_stat(dlb2, ev_queue->id,
> +					       DLB2_QID_DEPTH_GT75_LE100);
> +
> +	case depth_gt100_threshold:
> +		return dlb2_get_threshold_stat(dlb2, ev_queue->id,
> +					       DLB2_QID_DEPTH_GT100);
> +
> +	default: return -1;
> +	}
> +}
> +
> +int
> +dlb2_xstats_init(struct dlb2_eventdev *dlb2)
> +{
> +	/*
> +	 * define the stats names and types. Used to build up the device
> +	 * xstats array
> +	 * There are multiple set of stats:
> +	 * - device-level,
> +	 * - per-port,
> +	 * - per-qid,
> +	 *
> +	 * For each of these sets, we have three parallel arrays, one for the
> +	 * names, the other for the stat type parameter to be passed in the fn
> +	 * call to get that stat. The third array allows resetting or not.
> +	 * All these arrays must be kept in sync
> +	 */
> +	static const char * const dev_stats[] = {
> +		"rx_ok",
> +		"rx_drop",
> +		"rx_interrupt_wait",
> +		"rx_umonitor_umwait",
> +		"tx_ok",
> +		"total_polls",
> +		"zero_polls",
> +		"tx_nospc_ldb_hw_credits",
> +		"tx_nospc_dir_hw_credits",
> +		"tx_nospc_inflight_max",
> +		"tx_nospc_new_event_limit",
> +		"tx_nospc_inflight_credits",
> +		"nb_events_limit",
> +		"inflight_events",
> +		"ldb_pool_size",
> +		"dir_pool_size",
> +	};
> +	static const enum dlb2_xstats_type dev_types[] = {
> +		rx_ok,
> +		rx_drop,
> +		rx_interrupt_wait,
> +		rx_umonitor_umwait,
> +		tx_ok,
> +		total_polls,
> +		zero_polls,
> +		tx_nospc_ldb_hw_credits,
> +		tx_nospc_dir_hw_credits,
> +		tx_nospc_inflight_max,
> +		tx_nospc_new_event_limit,
> +		tx_nospc_inflight_credits,
> +		nb_events_limit,
> +		inflight_events,
> +		ldb_pool_size,
> +		dir_pool_size,
> +	};
> +	/* Note: generated device stats are not allowed to be reset. */
> +	static const uint8_t dev_reset_allowed[] = {
> +		0, /* rx_ok */
> +		0, /* rx_drop */
> +		0, /* rx_interrupt_wait */
> +		0, /* rx_umonitor_umwait */
> +		0, /* tx_ok */
> +		0, /* total_polls */
> +		0, /* zero_polls */
> +		0, /* tx_nospc_ldb_hw_credits */
> +		0, /* tx_nospc_dir_hw_credits */
> +		0, /* tx_nospc_inflight_max */
> +		0, /* tx_nospc_new_event_limit */
> +		0, /* tx_nospc_inflight_credits */
> +		0, /* nb_events_limit */
> +		0, /* inflight_events */
> +		0, /* ldb_pool_size */
> +		0, /* dir_pool_size */
> +	};
> +	static const char * const port_stats[] = {
> +		"is_configured",
> +		"is_load_balanced",
> +		"hw_id",
> +		"rx_ok",
> +		"rx_drop",
> +		"rx_interrupt_wait",
> +		"rx_umonitor_umwait",
> +		"tx_ok",
> +		"total_polls",
> +		"zero_polls",
> +		"tx_nospc_ldb_hw_credits",
> +		"tx_nospc_dir_hw_credits",
> +		"tx_nospc_inflight_max",
> +		"tx_nospc_new_event_limit",
> +		"tx_nospc_inflight_credits",
> +		"tx_new",
> +		"tx_fwd",
> +		"tx_rel",
> +		"tx_implicit_rel",
> +		"tx_sched_ordered",
> +		"tx_sched_unordered",
> +		"tx_sched_atomic",
> +		"tx_sched_directed",
> +		"tx_invalid",
> +		"outstanding_releases",
> +		"max_outstanding_releases",
> +		"rx_sched_ordered",
> +		"rx_sched_unordered",
> +		"rx_sched_atomic",
> +		"rx_sched_directed",
> +		"rx_sched_invalid"
> +	};
> +	static const enum dlb2_xstats_type port_types[] = {
> +		is_configured,
> +		is_load_balanced,
> +		hw_id,
> +		rx_ok,
> +		rx_drop,
> +		rx_interrupt_wait,
> +		rx_umonitor_umwait,
> +		tx_ok,
> +		total_polls,
> +		zero_polls,
> +		tx_nospc_ldb_hw_credits,
> +		tx_nospc_dir_hw_credits,
> +		tx_nospc_inflight_max,
> +		tx_nospc_new_event_limit,
> +		tx_nospc_inflight_credits,
> +		tx_new,
> +		tx_fwd,
> +		tx_rel,
> +		tx_implicit_rel,
> +		tx_sched_ordered,
> +		tx_sched_unordered,
> +		tx_sched_atomic,
> +		tx_sched_directed,
> +		tx_invalid,
> +		outstanding_releases,
> +		max_outstanding_releases,
> +		rx_sched_ordered,
> +		rx_sched_unordered,
> +		rx_sched_atomic,
> +		rx_sched_directed,
> +		rx_sched_invalid
> +	};
> +	static const uint8_t port_reset_allowed[] = {
> +		0, /* is_configured */
> +		0, /* is_load_balanced */
> +		0, /* hw_id */
> +		1, /* rx_ok */
> +		1, /* rx_drop */
> +		1, /* rx_interrupt_wait */
> +		1, /* rx_umonitor_umwait */
> +		1, /* tx_ok */
> +		1, /* total_polls */
> +		1, /* zero_polls */
> +		1, /* tx_nospc_ldb_hw_credits */
> +		1, /* tx_nospc_dir_hw_credits */
> +		1, /* tx_nospc_inflight_max */
> +		1, /* tx_nospc_new_event_limit */
> +		1, /* tx_nospc_inflight_credits */
> +		1, /* tx_new */
> +		1, /* tx_fwd */
> +		1, /* tx_rel */
> +		1, /* tx_implicit_rel */
> +		1, /* tx_sched_ordered */
> +		1, /* tx_sched_unordered */
> +		1, /* tx_sched_atomic */
> +		1, /* tx_sched_directed */
> +		1, /* tx_invalid */
> +		0, /* outstanding_releases */
> +		0, /* max_outstanding_releases */
> +		1, /* rx_sched_ordered */
> +		1, /* rx_sched_unordered */
> +		1, /* rx_sched_atomic */
> +		1, /* rx_sched_directed */
> +		1  /* rx_sched_invalid */
> +	};
> +
> +	/* QID specific stats */
> +	static const char * const qid_stats[] = {
> +		"is_configured",
> +		"is_load_balanced",
> +		"hw_id",
> +		"num_links",
> +		"sched_type",
> +		"enq_ok",
> +		"current_depth",
> +		"depth_threshold",
> +		"depth_le50_threshold",
> +		"depth_gt50_le75_threshold",
> +		"depth_gt75_le100_threshold",
> +		"depth_gt100_threshold",
> +	};
> +	static const enum dlb2_xstats_type qid_types[] = {
> +		is_configured,
> +		is_load_balanced,
> +		hw_id,
> +		num_links,
> +		sched_type,
> +		enq_ok,
> +		current_depth,
> +		depth_threshold,
> +		depth_le50_threshold,
> +		depth_gt50_le75_threshold,
> +		depth_gt75_le100_threshold,
> +		depth_gt100_threshold,
> +	};
> +	static const uint8_t qid_reset_allowed[] = {
> +		0, /* is_configured */
> +		0, /* is_load_balanced */
> +		0, /* hw_id */
> +		0, /* num_links */
> +		0, /* sched_type */
> +		1, /* enq_ok */
> +		0, /* current_depth */
> +		0, /* depth_threshold */
> +		1, /* depth_le50_threshold */
> +		1, /* depth_gt50_le75_threshold */
> +		1, /* depth_gt75_le100_threshold */
> +		1, /* depth_gt100_threshold */
> +	};
> +
> +	/* ---- end of stat definitions ---- */
> +
> +	/* check sizes, since a missed comma can lead to strings being
> +	 * joined by the compiler.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_DIM(dev_stats) != RTE_DIM(dev_types));
> +	RTE_BUILD_BUG_ON(RTE_DIM(port_stats) != RTE_DIM(port_types));
> +	RTE_BUILD_BUG_ON(RTE_DIM(qid_stats) != RTE_DIM(qid_types));
> +
> +	RTE_BUILD_BUG_ON(RTE_DIM(dev_stats) != RTE_DIM(dev_reset_allowed));
> +	RTE_BUILD_BUG_ON(RTE_DIM(port_stats) != RTE_DIM(port_reset_allowed));
> +	RTE_BUILD_BUG_ON(RTE_DIM(qid_stats) != RTE_DIM(qid_reset_allowed));
> +
> +	/* other vars */
> +	const unsigned int count = RTE_DIM(dev_stats) +
> +		DLB2_MAX_NUM_PORTS * RTE_DIM(port_stats) +
> +		DLB2_MAX_NUM_QUEUES * RTE_DIM(qid_stats);
> +	unsigned int i, port, qid, stat_id = 0;
> +
> +	dlb2->xstats = rte_zmalloc_socket(NULL,
> +			sizeof(dlb2->xstats[0]) * count, 0,
> +			dlb2->qm_instance.info.socket_id);
> +	if (dlb2->xstats == NULL)
> +		return -ENOMEM;
> +
> +#define sname dlb2->xstats[stat_id].name.name
> +	for (i = 0; i < RTE_DIM(dev_stats); i++, stat_id++) {
> +		dlb2->xstats[stat_id] = (struct dlb2_xstats_entry) {
> +			.fn_id = DLB2_XSTATS_FN_DEV,
> +			.stat = dev_types[i],
> +			.mode = RTE_EVENT_DEV_XSTATS_DEVICE,
> +			.reset_allowed = dev_reset_allowed[i],
> +		};
> +		snprintf(sname, sizeof(sname), "dev_%s", dev_stats[i]);
> +	}
> +	dlb2->xstats_count_mode_dev = stat_id;
> +
> +	for (port = 0; port < DLB2_MAX_NUM_PORTS; port++) {
> +		dlb2->xstats_offset_for_port[port] = stat_id;
> +
> +		uint32_t count_offset = stat_id;
> +
> +		for (i = 0; i < RTE_DIM(port_stats); i++, stat_id++) {
> +			dlb2->xstats[stat_id] = (struct dlb2_xstats_entry){
> +				.fn_id = DLB2_XSTATS_FN_PORT,
> +				.obj_idx = port,
> +				.stat = port_types[i],
> +				.mode = RTE_EVENT_DEV_XSTATS_PORT,
> +				.reset_allowed = port_reset_allowed[i],
> +			};
> +			snprintf(sname, sizeof(sname), "port_%u_%s",
> +				 port, port_stats[i]);
> +		}
> +
> +		dlb2->xstats_count_per_port[port] = stat_id - count_offset;
> +	}
> +
> +	dlb2->xstats_count_mode_port = stat_id - dlb2->xstats_count_mode_dev;
> +
> +	for (qid = 0; qid < DLB2_MAX_NUM_QUEUES; qid++) {
> +		uint32_t count_offset = stat_id;
> +
> +		dlb2->xstats_offset_for_qid[qid] = stat_id;
> +
> +		for (i = 0; i < RTE_DIM(qid_stats); i++, stat_id++) {
> +			dlb2->xstats[stat_id] = (struct dlb2_xstats_entry){
> +				.fn_id = DLB2_XSTATS_FN_QUEUE,
> +				.obj_idx = qid,
> +				.stat = qid_types[i],
> +				.mode = RTE_EVENT_DEV_XSTATS_QUEUE,
> +				.reset_allowed = qid_reset_allowed[i],
> +			};
> +			snprintf(sname, sizeof(sname), "qid_%u_%s",
> +				 qid, qid_stats[i]);
> +		}
> +
> +		dlb2->xstats_count_per_qid[qid] = stat_id - count_offset;
> +	}
> +
> +	dlb2->xstats_count_mode_queue = stat_id -
> +		(dlb2->xstats_count_mode_dev + dlb2->xstats_count_mode_port);
> +#undef sname
> +
> +	dlb2->xstats_count = stat_id;
> +
> +	return 0;
> +}
> +
> +void
> +dlb2_xstats_uninit(struct dlb2_eventdev *dlb2)
> +{
> +	rte_free(dlb2->xstats);
> +	dlb2->xstats_count = 0;
> +}
> +
> +int
> +dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
> +		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
> +		struct rte_event_dev_xstats_name *xstats_names,
> +		unsigned int *ids, unsigned int size)
> +{
> +	const struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
> +	unsigned int i;
> +	unsigned int xidx = 0;
> +
> +	RTE_SET_USED(mode);
> +	RTE_SET_USED(queue_port_id);
> +
> +	uint32_t xstats_mode_count = 0;
> +	uint32_t start_offset = 0;
> +
> +	switch (mode) {
> +	case RTE_EVENT_DEV_XSTATS_DEVICE:
> +		xstats_mode_count = dlb2->xstats_count_mode_dev;
> +		break;
> +	case RTE_EVENT_DEV_XSTATS_PORT:
> +		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
> +			break;
> +		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
> +		start_offset = dlb2->xstats_offset_for_port[queue_port_id];
> +		break;
> +	case RTE_EVENT_DEV_XSTATS_QUEUE:
> +#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
> +		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
> +			break;
> +#endif
> +		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
> +		start_offset = dlb2->xstats_offset_for_qid[queue_port_id];
> +		break;
> +	default:
> +		return -EINVAL;
> +	};
> +
> +	if (xstats_mode_count > size || !ids || !xstats_names)
> +		return xstats_mode_count;
> +
> +	for (i = 0; i < dlb2->xstats_count && xidx < size; i++) {
> +		if (dlb2->xstats[i].mode != mode)
> +			continue;
> +
> +		if (mode != RTE_EVENT_DEV_XSTATS_DEVICE &&
> +		    queue_port_id != dlb2->xstats[i].obj_idx)
> +			continue;
> +
> +		xstats_names[xidx] = dlb2->xstats[i].name;
> +		if (ids)
> +			ids[xidx] = start_offset + xidx;
> +		xidx++;
> +	}
> +	return xidx;
> +}
> +
> +static int
> +dlb2_xstats_update(struct dlb2_eventdev *dlb2,
> +		enum rte_event_dev_xstats_mode mode,
> +		uint8_t queue_port_id, const unsigned int ids[],
> +		uint64_t values[], unsigned int n, const uint32_t reset)
> +{
> +	unsigned int i;
> +	unsigned int xidx = 0;
> +
> +	RTE_SET_USED(mode);
> +	RTE_SET_USED(queue_port_id);
> +
> +	uint32_t xstats_mode_count = 0;
> +
> +	switch (mode) {
> +	case RTE_EVENT_DEV_XSTATS_DEVICE:
> +		xstats_mode_count = dlb2->xstats_count_mode_dev;
> +		break;
> +	case RTE_EVENT_DEV_XSTATS_PORT:
> +		if (queue_port_id >= DLB2_MAX_NUM_PORTS)
> +			goto invalid_value;
> +		xstats_mode_count = dlb2->xstats_count_per_port[queue_port_id];
> +		break;
> +	case RTE_EVENT_DEV_XSTATS_QUEUE:
> +#if (DLB2_MAX_NUM_QUEUES <= 255) /* max 8 bit value */
> +		if (queue_port_id >= DLB2_MAX_NUM_QUEUES)
> +			goto invalid_value;
> +#endif
> +		xstats_mode_count = dlb2->xstats_count_per_qid[queue_port_id];
> +		break;
> +	default:
> +		goto invalid_value;
> +	};
> +
> +	for (i = 0; i < n && xidx < xstats_mode_count; i++) {
> +		struct dlb2_xstats_entry *xs = &dlb2->xstats[ids[i]];
> +		dlb2_xstats_fn fn;
> +
> +		if (ids[i] > dlb2->xstats_count || xs->mode != mode)
> +			continue;
> +
> +		if (mode != RTE_EVENT_DEV_XSTATS_DEVICE &&
> +		    queue_port_id != xs->obj_idx)
> +			continue;
> +
> +		switch (xs->fn_id) {
> +		case DLB2_XSTATS_FN_DEV:
> +			fn = get_dev_stat;
> +			break;
> +		case DLB2_XSTATS_FN_PORT:
> +			fn = get_port_stat;
> +			break;
> +		case DLB2_XSTATS_FN_QUEUE:
> +			fn = get_queue_stat;
> +			break;
> +		default:
> +			DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
> +			goto invalid_value;
> +		}
> +
> +		uint64_t val = fn(dlb2, xs->obj_idx, xs->stat,
> +				  xs->extra_arg) - xs->reset_value;
> +
> +		if (values)
> +			values[xidx] = val;
> +
> +		if (xs->reset_allowed && reset)
> +			xs->reset_value += val;
> +
> +		xidx++;
> +	}
> +
> +	return xidx;
> +
> +invalid_value:
> +	return -EINVAL;
> +}
> +
> +int
> +dlb2_eventdev_xstats_get(const struct rte_eventdev *dev,
> +		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
> +		const unsigned int ids[], uint64_t values[], unsigned int n)
> +{
> +	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
> +	const uint32_t reset = 0;
> +
> +	return dlb2_xstats_update(dlb2, mode, queue_port_id, ids, values, n,
> +				  reset);
> +}
> +
> +uint64_t
> +dlb2_eventdev_xstats_get_by_name(const struct rte_eventdev *dev,
> +		const char *name, unsigned int *id)
> +{
> +	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
> +	unsigned int i;
> +	dlb2_xstats_fn fn;
> +
> +	for (i = 0; i < dlb2->xstats_count; i++) {
> +		struct dlb2_xstats_entry *xs = &dlb2->xstats[i];
> +
> +		if (strncmp(xs->name.name, name,
> +			    RTE_EVENT_DEV_XSTATS_NAME_SIZE) == 0){
> +			if (id != NULL)
> +				*id = i;
> +
> +			switch (xs->fn_id) {
> +			case DLB2_XSTATS_FN_DEV:
> +				fn = get_dev_stat;
> +				break;
> +			case DLB2_XSTATS_FN_PORT:
> +				fn = get_port_stat;
> +				break;
> +			case DLB2_XSTATS_FN_QUEUE:
> +				fn = get_queue_stat;
> +				break;
> +			default:
> +				DLB2_LOG_ERR("Unexpected xstat fn_id %d\n",
> +					     xs->fn_id);
> +				return (uint64_t)-1;
> +			}
> +
> +			return fn(dlb2,
xs->obj_idx, xs->stat, > + xs->extra_arg) - xs->reset_value; > + } > + } > + if (id !=3D NULL) > + *id =3D (uint32_t)-1; > + return (uint64_t)-1; > +} > + > +static void > +dlb2_xstats_reset_range(struct dlb2_eventdev *dlb2, uint32_t start, > + uint32_t num) > +{ > + uint32_t i; > + dlb2_xstats_fn fn; > + > + for (i =3D start; i < start + num; i++) { > + struct dlb2_xstats_entry *xs =3D &dlb2->xstats[i]; > + > + if (!xs->reset_allowed) > + continue; > + > + switch (xs->fn_id) { > + case DLB2_XSTATS_FN_DEV: > + fn =3D get_dev_stat; > + break; > + case DLB2_XSTATS_FN_PORT: > + fn =3D get_port_stat; > + break; > + case DLB2_XSTATS_FN_QUEUE: > + fn =3D get_queue_stat; > + break; > + default: > + DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs- > >fn_id); > + return; > + } > + > + uint64_t val =3D fn(dlb2, xs->obj_idx, xs->stat, xs->extra_arg); > + xs->reset_value =3D val; > + } > +} > + > +static int > +dlb2_xstats_reset_queue(struct dlb2_eventdev *dlb2, uint8_t queue_id, > + const uint32_t ids[], uint32_t nb_ids) { > + const uint32_t reset =3D 1; > + > + if (ids) { > + uint32_t nb_reset =3D dlb2_xstats_update(dlb2, > + RTE_EVENT_DEV_XSTATS_QUEUE, > + queue_id, ids, NULL, nb_ids, > + reset); > + return nb_reset =3D=3D nb_ids ? 0 : -EINVAL; > + } > + > + if (ids =3D=3D NULL) > + dlb2_xstats_reset_range(dlb2, > + dlb2->xstats_offset_for_qid[queue_id], > + dlb2->xstats_count_per_qid[queue_id]); > + > + return 0; > +} > + > +static int > +dlb2_xstats_reset_port(struct dlb2_eventdev *dlb2, uint8_t port_id, > + const uint32_t ids[], uint32_t nb_ids) { > + const uint32_t reset =3D 1; > + int offset =3D dlb2->xstats_offset_for_port[port_id]; > + int nb_stat =3D dlb2->xstats_count_per_port[port_id]; > + > + if (ids) { > + uint32_t nb_reset =3D dlb2_xstats_update(dlb2, > + RTE_EVENT_DEV_XSTATS_PORT, > port_id, > + ids, NULL, nb_ids, > + reset); > + return nb_reset =3D=3D nb_ids ? 
0 : -EINVAL; > + } > + > + dlb2_xstats_reset_range(dlb2, offset, nb_stat); > + return 0; > +} > + > +static int > +dlb2_xstats_reset_dev(struct dlb2_eventdev *dlb2, const uint32_t ids[], > + uint32_t nb_ids) > +{ > + uint32_t i; > + > + if (ids) { > + for (i =3D 0; i < nb_ids; i++) { > + uint32_t id =3D ids[i]; > + > + if (id >=3D dlb2->xstats_count_mode_dev) > + return -EINVAL; > + dlb2_xstats_reset_range(dlb2, id, 1); > + } > + } else { > + for (i =3D 0; i < dlb2->xstats_count_mode_dev; i++) > + dlb2_xstats_reset_range(dlb2, i, 1); > + } > + > + return 0; > +} > + > +int > +dlb2_eventdev_xstats_reset(struct rte_eventdev *dev, > + enum rte_event_dev_xstats_mode mode, > + int16_t queue_port_id, > + const uint32_t ids[], > + uint32_t nb_ids) > +{ > + struct dlb2_eventdev *dlb2 =3D dlb2_pmd_priv(dev); > + uint32_t i; > + > + /* handle -1 for queue_port_id here, looping over all ports/queues */ > + switch (mode) { > + case RTE_EVENT_DEV_XSTATS_DEVICE: > + if (dlb2_xstats_reset_dev(dlb2, ids, nb_ids)) > + return -EINVAL; > + break; > + case RTE_EVENT_DEV_XSTATS_PORT: > + if (queue_port_id =3D=3D -1) { > + for (i =3D 0; i < DLB2_MAX_NUM_PORTS; i++) { > + if (dlb2_xstats_reset_port(dlb2, i, > + ids, nb_ids)) > + return -EINVAL; > + } > + } else if (queue_port_id < DLB2_MAX_NUM_PORTS) { > + if (dlb2_xstats_reset_port(dlb2, queue_port_id, > + ids, nb_ids)) > + return -EINVAL; > + } > + break; > + case RTE_EVENT_DEV_XSTATS_QUEUE: > + if (queue_port_id =3D=3D -1) { > + for (i =3D 0; i < DLB2_MAX_NUM_QUEUES; i++) { > + if (dlb2_xstats_reset_queue(dlb2, i, > + ids, nb_ids)) > + return -EINVAL; > + } > + } else if (queue_port_id < DLB2_MAX_NUM_QUEUES) { > + if (dlb2_xstats_reset_queue(dlb2, queue_port_id, > + ids, nb_ids)) > + return -EINVAL; > + } > + break; > + }; > + > + return 0; > +} > + > +void > +dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f) { > + struct dlb2_eventdev *dlb2; > + struct dlb2_hw_dev *handle; > + int i; > + > + if (!f) { > + printf("Invalid file 
pointer\n"); > + return; > + } > + > + if (!dev) { > + fprintf(f, "Invalid event device\n"); > + return; > + } > + > + dlb2 =3D dlb2_pmd_priv(dev); > + > + if (!dlb2) { > + fprintf(f, "DLB2 Event device cannot be dumped!\n"); > + return; > + } > + > + if (!dlb2->configured) > + fprintf(f, "DLB2 Event device is not configured\n"); > + > + handle =3D &dlb2->qm_instance; > + > + fprintf(f, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n"); > + fprintf(f, "DLB2 Device Dump\n"); > + fprintf(f, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n"); > + > + fprintf(f, "Processor supports umonitor/umwait instructions =3D %s\n", > + dlb2->umwait_allowed ? "yes" : "no"); > + > + /* Generic top level device information */ > + > + fprintf(f, "device is configured and run state =3D"); > + if (dlb2->run_state =3D=3D DLB2_RUN_STATE_STOPPED) > + fprintf(f, "STOPPED\n"); > + else if (dlb2->run_state =3D=3D DLB2_RUN_STATE_STOPPING) > + fprintf(f, "STOPPING\n"); > + else if (dlb2->run_state =3D=3D DLB2_RUN_STATE_STARTING) > + fprintf(f, "STARTING\n"); > + else if (dlb2->run_state =3D=3D DLB2_RUN_STATE_STARTED) > + fprintf(f, "STARTED\n"); > + else > + fprintf(f, "UNEXPECTED\n"); > + > + fprintf(f, > + "dev ID=3D%d, dom ID=3D%u, name=3D%s, path=3D%s, sock=3D%u, > evdev=3D%p\n", > + handle->device_id, handle->domain_id, handle->device_name, > + handle->device_path, handle->info.socket_id, dlb2->event_dev); > + > + fprintf(f, "num dir ports=3D%u, num dir queues=3D%u\n", > + dlb2->num_dir_ports, dlb2->num_dir_queues); > + > + fprintf(f, "num ldb ports=3D%u, num ldb queues=3D%u\n", > + dlb2->num_ldb_ports, dlb2->num_ldb_queues); > + > + fprintf(f, "num atomic inflights=3D%u, hist list entries=3D%u\n", > + handle->cfg.resources.num_atomic_inflights, > + handle->cfg.resources.num_hist_list_entries); > + > + fprintf(f, "results from most recent hw resource query:\n"); > + > + fprintf(f, "\tnum_sched_domains =3D %u\n", > + dlb2->hw_rsrc_query_results.num_sched_domains); > + > + fprintf(f, 
"\tnum_ldb_queues =3D %u\n", > + dlb2->hw_rsrc_query_results.num_ldb_queues); > + > + fprintf(f, "\tnum_ldb_ports =3D %u\n", > + dlb2->hw_rsrc_query_results.num_ldb_ports); > + > + fprintf(f, "\tnum_dir_ports =3D %u\n", > + dlb2->hw_rsrc_query_results.num_dir_ports); > + > + fprintf(f, "\tnum_atomic_inflights =3D %u\n", > + dlb2->hw_rsrc_query_results.num_atomic_inflights); > + > + fprintf(f, "\tnum_hist_list_entries =3D %u\n", > + dlb2->hw_rsrc_query_results.num_hist_list_entries); > + > + fprintf(f, "\tmax_contiguous_hist_list_entries =3D %u\n", > + dlb2->hw_rsrc_query_results.max_contiguous_hist_list_entries); > + > + fprintf(f, "\tnum_ldb_credits =3D %u\n", > + dlb2->hw_rsrc_query_results.num_ldb_credits); > + > + fprintf(f, "\tnum_dir_credits =3D %u\n", > + dlb2->hw_rsrc_query_results.num_dir_credits); > + > + /* Port level information */ > + > + for (i =3D 0; i < dlb2->num_ports; i++) { > + struct dlb2_eventdev_port *p =3D &dlb2->ev_ports[i]; > + int j; > + > + if (!p->enq_configured) > + fprintf(f, "Port_%d is not configured\n", i); > + > + fprintf(f, "Port_%d\n", i); > + fprintf(f, "=3D=3D=3D=3D=3D=3D=3D\n"); > + > + fprintf(f, "\tevport_%u is configured, setup done=3D%d\n", > + p->id, p->setup_done); > + > + fprintf(f, "\tconfig state=3D%d, port state=3D%d\n", > + p->qm_port.config_state, p->qm_port.state); > + > + fprintf(f, "\tport is %s\n", > + p->qm_port.is_directed ? 
"directed" : "load balanced"); > + > + fprintf(f, "\toutstanding releases=3D%u\n", > + p->outstanding_releases); > + > + fprintf(f, "\tinflight max=3D%u, inflight credits=3D%u\n", > + p->inflight_max, p->inflight_credits); > + > + fprintf(f, "\tcredit update quanta=3D%u, implicit release =3D%u\n", > + p->credit_update_quanta, p->implicit_release); > + > + fprintf(f, "\tnum_links=3D%d, queues -> ", p->num_links); > + > + for (j =3D 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++) { > + if (p->link[j].valid) > + fprintf(f, "id=3D%u prio=3D%u ", > + p->link[j].queue_id, > + p->link[j].priority); > + } > + fprintf(f, "\n"); > + > + fprintf(f, "\thardware port id=3D%u\n", p->qm_port.id); > + > + fprintf(f, "\tcached_ldb_credits=3D%u\n", > + p->qm_port.cached_ldb_credits); > + > + fprintf(f, "\tldb_credits =3D %u\n", > + p->qm_port.ldb_credits); > + > + fprintf(f, "\tcached_dir_credits =3D %u\n", > + p->qm_port.cached_dir_credits); > + > + fprintf(f, "\tdir_credits =3D %u\n", > + p->qm_port.dir_credits); > + > + fprintf(f, "\tgenbit=3D%d, cq_idx=3D%d, cq_depth=3D%d\n", > + p->qm_port.gen_bit, > + p->qm_port.cq_idx, > + p->qm_port.cq_depth); > + > + fprintf(f, "\tinterrupt armed=3D%d\n", > + p->qm_port.int_armed); > + > + fprintf(f, "\tPort statistics\n"); > + > + fprintf(f, "\t\trx_ok %" PRIu64 "\n", > + p->stats.traffic.rx_ok); > + > + fprintf(f, "\t\trx_drop %" PRIu64 "\n", > + p->stats.traffic.rx_drop); > + > + fprintf(f, "\t\trx_interrupt_wait %" PRIu64 "\n", > + p->stats.traffic.rx_interrupt_wait); > + > + fprintf(f, "\t\trx_umonitor_umwait %" PRIu64 "\n", > + p->stats.traffic.rx_umonitor_umwait); > + > + fprintf(f, "\t\ttx_ok %" PRIu64 "\n", > + p->stats.traffic.tx_ok); > + > + fprintf(f, "\t\ttotal_polls %" PRIu64 "\n", > + p->stats.traffic.total_polls); > + > + fprintf(f, "\t\tzero_polls %" PRIu64 "\n", > + p->stats.traffic.zero_polls); > + > + fprintf(f, "\t\ttx_nospc_ldb_hw_credits %" PRIu64 "\n", > + p->stats.traffic.tx_nospc_ldb_hw_credits); > + > + fprintf(f, 
"\t\ttx_nospc_dir_hw_credits %" PRIu64 "\n", > + p->stats.traffic.tx_nospc_dir_hw_credits); > + > + fprintf(f, "\t\ttx_nospc_inflight_max %" PRIu64 "\n", > + p->stats.traffic.tx_nospc_inflight_max); > + > + fprintf(f, "\t\ttx_nospc_new_event_limit %" PRIu64 "\n", > + p->stats.traffic.tx_nospc_new_event_limit); > + > + fprintf(f, "\t\ttx_nospc_inflight_credits %" PRIu64 "\n", > + p->stats.traffic.tx_nospc_inflight_credits); > + > + fprintf(f, "\t\ttx_new %" PRIu64 "\n", > + p->stats.tx_op_cnt[RTE_EVENT_OP_NEW]); > + > + fprintf(f, "\t\ttx_fwd %" PRIu64 "\n", > + p->stats.tx_op_cnt[RTE_EVENT_OP_FORWARD]); > + > + fprintf(f, "\t\ttx_rel %" PRIu64 "\n", > + p->stats.tx_op_cnt[RTE_EVENT_OP_RELEASE]); > + > + fprintf(f, "\t\ttx_implicit_rel %" PRIu64 "\n", > + p->stats.tx_implicit_rel); > + > + fprintf(f, "\t\ttx_sched_ordered %" PRIu64 "\n", > + p->stats.tx_sched_cnt[DLB2_SCHED_ORDERED]); > + > + fprintf(f, "\t\ttx_sched_unordered %" PRIu64 "\n", > + p->stats.tx_sched_cnt[DLB2_SCHED_UNORDERED]); > + > + fprintf(f, "\t\ttx_sched_atomic %" PRIu64 "\n", > + p->stats.tx_sched_cnt[DLB2_SCHED_ATOMIC]); > + > + fprintf(f, "\t\ttx_sched_directed %" PRIu64 "\n", > + p->stats.tx_sched_cnt[DLB2_SCHED_DIRECTED]); > + > + fprintf(f, "\t\ttx_invalid %" PRIu64 "\n", > + p->stats.tx_invalid); > + > + fprintf(f, "\t\trx_sched_ordered %" PRIu64 "\n", > + p->stats.rx_sched_cnt[DLB2_SCHED_ORDERED]); > + > + fprintf(f, "\t\trx_sched_unordered %" PRIu64 "\n", > + p->stats.rx_sched_cnt[DLB2_SCHED_UNORDERED]); > + > + fprintf(f, "\t\trx_sched_atomic %" PRIu64 "\n", > + p->stats.rx_sched_cnt[DLB2_SCHED_ATOMIC]); > + > + fprintf(f, "\t\trx_sched_directed %" PRIu64 "\n", > + p->stats.rx_sched_cnt[DLB2_SCHED_DIRECTED]); > + > + fprintf(f, "\t\trx_sched_invalid %" PRIu64 "\n", > + p->stats.rx_sched_invalid); > + } > + > + /* Queue level information */ > + > + for (i =3D 0; i < dlb2->num_queues; i++) { > + struct dlb2_eventdev_queue *q =3D &dlb2->ev_queues[i]; > + int j, k; > + > + if 
(!q->setup_done) > + fprintf(f, "Queue_%d is not configured\n", i); > + > + fprintf(f, "Queue_%d\n", i); > + fprintf(f, "=3D=3D=3D=3D=3D=3D=3D=3D\n"); > + > + fprintf(f, "\tevqueue_%u is set up\n", q->id); > + > + fprintf(f, "\tqueue is %s\n", > + q->qm_queue.is_directed ? "directed" : "load > balanced"); > + > + fprintf(f, "\tnum_links=3D%d, ports -> ", q->num_links); > + > + for (j =3D 0; j < dlb2->num_ports; j++) { > + struct dlb2_eventdev_port *p =3D &dlb2->ev_ports[j]; > + > + for (k =3D 0; k < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; > k++) { > + if (p->link[k].valid && > + p->link[k].queue_id =3D=3D q->id) > + fprintf(f, "id=3D%u prio=3D%u ", > + p->id, p->link[k].priority); > + } > + } > + fprintf(f, "\n"); > + > + fprintf(f, "\tcurrent depth: %u events\n", > + dlb2_get_queue_depth(dlb2, q)); > + > + fprintf(f, "\tnum qid inflights=3D%u, sched_type=3D%d\n", > + q->qm_queue.num_qid_inflights, q- > >qm_queue.sched_type); > + } > +} > diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.bu= ild > index 557e3b4..492452e 100644 > --- a/drivers/event/dlb2/meson.build > +++ b/drivers/event/dlb2/meson.build > @@ -3,6 +3,7 @@ >=20 > sources =3D files('dlb2.c', > 'dlb2_iface.c', > + 'dlb2_xstats.c', > 'pf/dlb2_main.c', > 'pf/dlb2_pf.c', > 'pf/base/dlb2_resource.c' > -- > 2.6.4