From: "Singh, Jasvinder" <jasvinder.singh@intel.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 2/2] qos: rearrange enqueue procedure
Date: Fri, 2 Apr 2021 14:14:34 +0000	[thread overview]
Message-ID: <CY4PR1101MB21343FFE2B581CF14E940CD8E07A9@CY4PR1101MB2134.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20210318115613.5503-2-konstantin.ananyev@intel.com>



> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Thursday, March 18, 2021 11:56 AM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [PATCH v2 2/2] qos: rearrange enqueue procedure
> 
> In many usage scenarios the input mbufs for rte_sched_port_enqueue() are
> not yet in the CPU cache(s), which causes quite significant stalls due to
> memory latency. The current implementation tries to mitigate this using SW
> pipeline and SW prefetch techniques, but stalls are still present.
> Rework rte_sched_port_enqueue() to fetch all mbuf metadata as the first
> stage of the function. That helps to minimise load stalls at the later
> stages of enqueue() and improves overall enqueue performance.
> With examples/qos_sched I observed:
> on ICX box: up to 30% cycles reduction
> on CSX and BDX boxes: 15-20% cycles reduction
> I also ran tests with the mbufs already in the cache (one core doing RX,
> QOS and TX). In that scenario, no performance drop was observed on any of
> the IA boxes mentioned above.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> v2: fix clang and checkpatch complaints
> ---
>  lib/librte_sched/rte_sched.c | 219 +++++------------------------------
>  1 file changed, 31 insertions(+), 188 deletions(-)
> 
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 7c5688068..41ef147e0 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -1861,24 +1861,23 @@ debug_check_queue_slab(struct rte_sched_subport *subport, uint32_t bmp_pos,
>  #endif /* RTE_SCHED_DEBUG */
> 
>  static inline struct rte_sched_subport *
> -rte_sched_port_subport(struct rte_sched_port *port,
> -	struct rte_mbuf *pkt)
> +sched_port_subport(const struct rte_sched_port *port, struct rte_mbuf_sched sch)
>  {
> -	uint32_t queue_id = rte_mbuf_sched_queue_get(pkt);
> +	uint32_t queue_id = sch.queue_id;
>  	uint32_t subport_id = queue_id >> (port->n_pipes_per_subport_log2 + 4);
> 
>  	return port->subports[subport_id];
>  }
> 
>  static inline uint32_t
> -rte_sched_port_enqueue_qptrs_prefetch0(struct rte_sched_subport *subport,
> -	struct rte_mbuf *pkt, uint32_t subport_qmask)
> +sched_port_enqueue_qptrs_prefetch0(const struct rte_sched_subport *subport,
> +	struct rte_mbuf_sched sch, uint32_t subport_qmask)
>  {
>  	struct rte_sched_queue *q;
>  #ifdef RTE_SCHED_COLLECT_STATS
>  	struct rte_sched_queue_extra *qe;
>  #endif
> -	uint32_t qindex = rte_mbuf_sched_queue_get(pkt);
> +	uint32_t qindex = sch.queue_id;
>  	uint32_t subport_queue_id = subport_qmask & qindex;
> 
>  	q = subport->queue + subport_queue_id;
> @@ -1971,197 +1970,41 @@ int
>  rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts,
>  		       uint32_t n_pkts)
>  {
> -	struct rte_mbuf *pkt00, *pkt01, *pkt10, *pkt11, *pkt20, *pkt21,
> -		*pkt30, *pkt31, *pkt_last;
> -	struct rte_mbuf **q00_base, **q01_base, **q10_base, **q11_base,
> -		**q20_base, **q21_base, **q30_base, **q31_base, **q_last_base;
> -	struct rte_sched_subport *subport00, *subport01, *subport10, *subport11,
> -		*subport20, *subport21, *subport30, *subport31, *subport_last;
> -	uint32_t q00, q01, q10, q11, q20, q21, q30, q31, q_last;
> -	uint32_t r00, r01, r10, r11, r20, r21, r30, r31, r_last;
> -	uint32_t subport_qmask;
>  	uint32_t result, i;
> +	struct rte_mbuf_sched sch[n_pkts];
> +	struct rte_sched_subport *subports[n_pkts];
> +	struct rte_mbuf **q_base[n_pkts];
> +	uint32_t q[n_pkts];
> +
> +	const uint32_t subport_qmask =
> +		(1 << (port->n_pipes_per_subport_log2 + 4)) - 1;
> 
>  	result = 0;
> -	subport_qmask = (1 << (port->n_pipes_per_subport_log2 + 4)) - 1;
> 
> -	/*
> -	 * Less then 6 input packets available, which is not enough to
> -	 * feed the pipeline
> -	 */
> -	if (unlikely(n_pkts < 6)) {
> -		struct rte_sched_subport *subports[5];
> -		struct rte_mbuf **q_base[5];
> -		uint32_t q[5];
> -
> -		/* Prefetch the mbuf structure of each packet */
> -		for (i = 0; i < n_pkts; i++)
> -			rte_prefetch0(pkts[i]);
> -
> -		/* Prefetch the subport structure for each packet */
> -		for (i = 0; i < n_pkts; i++)
> -			subports[i] = rte_sched_port_subport(port, pkts[i]);
> -
> -		/* Prefetch the queue structure for each queue */
> -		for (i = 0; i < n_pkts; i++)
> -			q[i] = rte_sched_port_enqueue_qptrs_prefetch0(subports[i],
> -					pkts[i], subport_qmask);
> -
> -		/* Prefetch the write pointer location of each queue */
> -		for (i = 0; i < n_pkts; i++) {
> -			q_base[i] = rte_sched_subport_pipe_qbase(subports[i], q[i]);
> -			rte_sched_port_enqueue_qwa_prefetch0(port, subports[i],
> -				q[i], q_base[i]);
> -		}
> +	/* Prefetch the mbuf structure of each packet */
> +	for (i = 0; i < n_pkts; i++)
> +		sch[i] = pkts[i]->hash.sched;
> 

Hi Konstantin, thanks for the patch. In the above case, all packets are touched straight away without any prefetch. For an input burst of 64 packets, that means 512 bytes of packet addresses (8 cache lines), which are likely to be available in the cache. For larger bursts, e.g. 128 or 256 packets, there may be instances where some addresses are not in the cache, which may stall the core. How about adding an explicit prefetch before starting to iterate through the packets, if that helps?
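Something along these lines is what I have in mind (just a rough sketch against the new loop in this patch, reusing its pkts/sch/n_pkts/i variables; the prefetch distance of 8 is an assumption and would need tuning per platform):

	/* Sketch: prefetch each mbuf a fixed distance ahead of the
	 * iteration that reads its hash.sched field, so a large burst
	 * does not stall on cold mbuf cache lines. hash.sched sits in
	 * the first cache line of struct rte_mbuf, so prefetching the
	 * mbuf address itself is enough.
	 */
	for (i = 0; i < n_pkts; i++) {
		if (i + 8 < n_pkts)
			rte_prefetch0(pkts[i + 8]);
		sch[i] = pkts[i]->hash.sched;
	}

That way the loads issued for pkts[i]->hash.sched should mostly hit cache lines that are already in flight, at the cost of one extra prefetch instruction per packet.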


Thread overview: 9+ messages
2021-03-16 17:07 [dpdk-dev] [PATCH 1/2] examples/qos_sched: fixup colors value overrun Konstantin Ananyev
2021-03-16 17:07 ` [dpdk-dev] [PATCH 2/2] qos: rearrange enqueue procedure Konstantin Ananyev
2021-03-18 11:56 ` [dpdk-dev] [PATCH v2 1/2] examples/qos_sched: fixup colors value overrun Konstantin Ananyev
2021-03-18 11:56   ` [dpdk-dev] [PATCH v2 2/2] qos: rearrange enqueue procedure Konstantin Ananyev
2021-04-02 14:14     ` Singh, Jasvinder [this message]
2021-04-02 21:12       ` Dumitrescu, Cristian
2021-04-03 23:53         ` Ananyev, Konstantin
2021-11-17 10:33           ` Thomas Monjalon
2021-11-17 10:53             ` Dumitrescu, Cristian
