From: Jerin Jacob
Date: Mon, 8 Mar 2021 13:22:04 +0530
To: "Van Haaren, Harry"
Cc: "dev@dpdk.org", "david.marchand@redhat.com", "mattias.ronnblom", "jerinj@marvell.com"
Subject: Re: [dpdk-dev] [PATCH v4] event/sw: add xstats to expose progress details
References: <20210212165814.2189305-1-harry.van.haaren@intel.com> <20210303105643.2552378-1-harry.van.haaren@intel.com>
List-Id: DPDK patches and discussions

On Thu, Mar 4, 2021 at 4:33 PM Van Haaren, Harry wrote:
>
> Fix typo in Mattias' email, apologies for noise.
>
> > -----Original Message-----
> > From: Van Haaren, Harry
> > Sent: Wednesday, March 3, 2021 10:57 AM
> > To: dev@dpdk.org
> > Cc: david.marchand@redhat.com; mattias.ronnblom@ericcson.com;
> > jerinj@marvell.com; Van Haaren, Harry
> > Subject: [PATCH v4] event/sw: add xstats to expose progress details
> >
> > Today it is difficult to know if the SW Eventdev PMD is making
> > forward progress when it runs an iteration of its service. This
> > commit adds two xstats to give better visibility to the application.
> >
> > The new xstats provide an application with which Eventdev ports
> > recieved work in the last iteration of scheduling, as well if
> > forward progress was made by the scheduler.
> >
> > This patch implements an xstat for the SW PMD that exposes a
> > bitmask of ports that were scheduled to. In the unlikely case
> > that the SW PMD instance has 64 or more ports, return UINT64_MAX.
> >
> > Signed-off-by: Harry van Haaren

Please fix the following checkpatch issue as needed.

### event/sw: add xstats to expose progress details

WARNING:TYPO_SPELLING: 'recieved' may be misspelled - perhaps 'received'?
#11:
recieved work in the last iteration of scheduling, as well if
^^^^^^^^

WARNING:BRACES: braces {} are not necessary for any arm of this statement
#65: FILE: drivers/event/sw/sw_evdev_scheduler.c:610:
+	if (likely(sw->port_count < 64)) {
[...]
+	} else {
[...]

total: 0 errors, 2 warnings, 153 lines checked

> >
> > ---
> >
> > v3:
> > - Simplify all metrics to Event SW PMD
> >
> > v2:
> > - Fixup printf() %ld to PRIu64
> >
> > Note most of the changes here are unit-test changes to add
> > a statistic to the PMD. The actual "useful code" is a mere
> > handful of lines in a lot of noise.
> >
> > ---
> >  drivers/event/sw/sw_evdev.h           |  2 ++
> >  drivers/event/sw/sw_evdev_scheduler.c | 15 ++++++++++++++
> >  drivers/event/sw/sw_evdev_selftest.c  | 28 ++++++++++++++-------------
> >  drivers/event/sw/sw_evdev_xstats.c    |  9 ++++++++-
> >  4 files changed, 40 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
> > index 5ab6465c83..33645bd1df 100644
> > --- a/drivers/event/sw/sw_evdev.h
> > +++ b/drivers/event/sw/sw_evdev.h
> > @@ -259,6 +259,8 @@ struct sw_evdev {
> >  	uint64_t sched_no_iq_enqueues;
> >  	uint64_t sched_no_cq_enqueues;
> >  	uint64_t sched_cq_qid_called;
> > +	uint64_t sched_last_iter_bitmask;
> > +	uint8_t sched_progress_last_iter;
> >
> >  	uint8_t started;
> >  	uint32_t credit_update_quanta;
> >
> > diff --git a/drivers/event/sw/sw_evdev_scheduler.c
> > b/drivers/event/sw/sw_evdev_scheduler.c
> > index f747b3c6d4..d3a6bd5cda 100644
> > --- a/drivers/event/sw/sw_evdev_scheduler.c
> > +++ b/drivers/event/sw/sw_evdev_scheduler.c
> > @@ -559,6 +559,11 @@ sw_event_schedule(struct rte_eventdev *dev)
> >  	sw->sched_no_iq_enqueues += (in_pkts_total == 0);
> >  	sw->sched_no_cq_enqueues += (out_pkts_total == 0);
> >
> > +	uint64_t work_done = (in_pkts_total + out_pkts_total) != 0;
> > +	sw->sched_progress_last_iter = work_done;
> > +
> > +	uint64_t cqs_scheds_last_iter = 0;
> > +
> >  	/* push all the internal buffered QEs in port->cq_ring to the
> >  	 * worker cores: aka, do the ring transfers batched.
> >  	 */
> > @@ -578,6 +583,7 @@ sw_event_schedule(struct rte_eventdev *dev)
> >  				&sw->cq_ring_space[i]);
> >  			port->cq_buf_count = 0;
> >  			no_enq = 0;
> > +			cqs_scheds_last_iter |= (1ULL << i);
> >  		} else {
> >  			sw->cq_ring_space[i] =
> >  				rte_event_ring_free_count(worker) -
> > @@ -597,4 +603,13 @@ sw_event_schedule(struct rte_eventdev *dev)
> >  		sw->sched_min_burst = sw->sched_min_burst_size;
> >  	}
> >
> > +	/* Provide stats on what eventdev ports were scheduled to this
> > +	 * iteration. If more than 64 ports are active, always report that
> > +	 * all Eventdev ports have been scheduled events.
> > +	 */
> > +	if (likely(sw->port_count < 64)) {
> > +		sw->sched_last_iter_bitmask = cqs_scheds_last_iter;
> > +	} else {
> > +		sw->sched_last_iter_bitmask = UINT64_MAX;
> > +	}
> > }
> >
> > diff --git a/drivers/event/sw/sw_evdev_selftest.c
> > b/drivers/event/sw/sw_evdev_selftest.c
> > index e4bfb3a0f1..d53e903129 100644
> > --- a/drivers/event/sw/sw_evdev_selftest.c
> > +++ b/drivers/event/sw/sw_evdev_selftest.c
> > @@ -873,15 +873,15 @@ xstats_tests(struct test *t)
> >  	int ret = rte_event_dev_xstats_names_get(evdev,
> >  					RTE_EVENT_DEV_XSTATS_DEVICE,
> >  					0, xstats_names, ids, XSTATS_MAX);
> > -	if (ret != 6) {
> > -		printf("%d: expected 6 stats, got return %d\n", __LINE__, ret);
> > +	if (ret != 8) {
> > +		printf("%d: expected 8 stats, got return %d\n", __LINE__, ret);
> >  		return -1;
> >  	}
> >  	ret = rte_event_dev_xstats_get(evdev,
> >  					RTE_EVENT_DEV_XSTATS_DEVICE,
> >  					0, ids, values, ret);
> > -	if (ret != 6) {
> > -		printf("%d: expected 6 stats, got return %d\n", __LINE__, ret);
> > +	if (ret != 8) {
> > +		printf("%d: expected 8 stats, got return %d\n", __LINE__, ret);
> >  		return -1;
> >  	}
> >
> > @@ -959,7 +959,7 @@ xstats_tests(struct test *t)
> >  	ret = rte_event_dev_xstats_get(evdev,
> >  					RTE_EVENT_DEV_XSTATS_DEVICE,
> >  					0, ids, values, num_stats);
> > -	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
> > +	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0, 4, 1};
> >  	for (i = 0; (signed int)i < ret; i++) {
> >  		if (expected[i] != values[i]) {
> >  			printf(
> > @@ -975,7 +975,7 @@ xstats_tests(struct test *t)
> >  			0, NULL, 0);
> >
> >  	/* ensure reset statistics are zero-ed */
> > -	static const uint64_t expected_zero[] = {0, 0, 0, 0, 0, 0};
> > +	static const uint64_t expected_zero[] = {0, 0, 0, 0, 0, 0, 0, 0};
> >  	ret = rte_event_dev_xstats_get(evdev,
> >  					RTE_EVENT_DEV_XSTATS_DEVICE,
> >  					0, ids, values, num_stats);
> > @@ -1460,7 +1460,7 @@ xstats_id_reset_tests(struct test *t)
> >  	for (i = 0; i < XSTATS_MAX; i++)
> >  		ids[i] = i;
> >
> > -#define NUM_DEV_STATS 6
> > +#define NUM_DEV_STATS 8
> >  	/* Device names / values */
> >  	int num_stats = rte_event_dev_xstats_names_get(evdev,
> >  					RTE_EVENT_DEV_XSTATS_DEVICE,
> > @@ -1504,8 +1504,10 @@
> >  	static const char * const dev_names[] = {
> >  		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
> >  		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
> > +		"dev_sched_last_iter_bitmask",
> > +		"dev_sched_progress_last_iter"
> >  	};
> > -	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
> > +	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0, 4, 1};
> >  	for (i = 0; (int)i < ret; i++) {
> >  		unsigned int id;
> >  		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
> > @@ -1518,8 +1520,8 @@
> >  		}
> >  		if (val != dev_expected[i]) {
> >  			printf("%d: %s value incorrect, expected %"
> > -				PRIu64" got %d\n", __LINE__, dev_names[i],
> > -				dev_expected[i], id);
> > +				PRIu64" got %"PRIu64"\n", __LINE__,
> > +				dev_names[i], dev_expected[i], val);
> >  			goto fail;
> >  		}
> >  		/* reset to zero */
> > @@ -1542,11 +1544,11 @@ xstats_id_reset_tests(struct test *t)
> >  	}
> > };
> >
> > -/* 48 is stat offset from start of the devices whole xstats.
> > +/* 50 is stat offset from start of the devices whole xstats.
> >  * This WILL break every time we add a statistic to a port
> >  * or the device, but there is no other way to test
> >  */
> > -#define PORT_OFF 48
> > +#define PORT_OFF 50
> > /* num stats for the tested port. CQ size adds more stats to a port */
> > #define NUM_PORT_STATS 21
> > /* the port to test. */
> > @@ -1670,7 +1672,7 @@ xstats_id_reset_tests(struct test *t)
> >  	/* queue offset from start of the devices whole xstats.
> >  	 * This will break every time we add a statistic to a device/port/queue
> >  	 */
> > -#define QUEUE_OFF 90
> > +#define QUEUE_OFF 92
> >  	const uint32_t queue = 0;
> >  	num_stats = rte_event_dev_xstats_names_get(evdev,
> >  			RTE_EVENT_DEV_XSTATS_QUEUE, queue,
> >
> > diff --git a/drivers/event/sw/sw_evdev_xstats.c
> > b/drivers/event/sw/sw_evdev_xstats.c
> > index 02f7874180..c2647d7da2 100644
> > --- a/drivers/event/sw/sw_evdev_xstats.c
> > +++ b/drivers/event/sw/sw_evdev_xstats.c
> > @@ -17,6 +17,8 @@ enum xstats_type {
> >  	/* device instance specific */
> >  	no_iq_enq,
> >  	no_cq_enq,
> > +	sched_last_iter_bitmask,
> > +	sched_progress_last_iter,
> >  	/* port_specific */
> >  	rx_used,
> >  	rx_free,
> > @@ -57,6 +59,9 @@ get_dev_stat(const struct sw_evdev *sw, uint16_t obj_idx
> > __rte_unused,
> >  	case calls: return sw->sched_called;
> >  	case no_iq_enq: return sw->sched_no_iq_enqueues;
> >  	case no_cq_enq: return sw->sched_no_cq_enqueues;
> > +	case sched_last_iter_bitmask: return sw->sched_last_iter_bitmask;
> > +	case sched_progress_last_iter: return sw->sched_progress_last_iter;
> > +
> >  	default: return -1;
> >  	}
> > }
> > @@ -177,9 +182,11 @@ sw_xstats_init(struct sw_evdev *sw)
> >  	 */
> >  	static const char * const dev_stats[] = { "rx", "tx", "drop",
> >  		"sched_calls", "sched_no_iq_enq", "sched_no_cq_enq",
> > +		"sched_last_iter_bitmask", "sched_progress_last_iter",
> >  	};
> >  	static const enum xstats_type dev_types[] = { rx, tx, dropped,
> > -		calls, no_iq_enq, no_cq_enq,
> > +		calls, no_iq_enq, no_cq_enq, sched_last_iter_bitmask,
> > +		sched_progress_last_iter,
> >  	};
> >  	/* all device stats are allowed to be reset */
> >
> > --
> > 2.25.1
>