DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Victor Kaplansky <vkaplans@redhat.com>
Cc: dev@dpdk.org, stable@dpdk.org,
	Jens Freimann <jfreiman@redhat.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	Yuanhan Liu <yliu@fridaylinux.org>,
	Tiwei Bie <tiwei.bie@intel.com>,
	Jianfeng Tan <jianfeng.tan@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4] vhost_user: protect active rings from async ring changes
Date: Wed, 20 Dec 2017 12:19:45 -0800
Message-ID: <20171220121945.0143b0af@xeon-e3>
In-Reply-To: <634157847.2119460.1513800390896.JavaMail.zimbra@redhat.com>

On Wed, 20 Dec 2017 15:06:30 -0500 (EST)
Victor Kaplansky <vkaplans@redhat.com> wrote:

> > Wrapping locking in inlines adds nothing and makes life harder
> > for static analysis tools.
> 
> Yep. In this case it hides the details of how the locking is
> implemented (e.g. the name of the lock). It also facilitates
> replacing the locking mechanism with another implementation.
> See below.
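
[For illustration: a minimal sketch of the kind of inline wrapper being
debated here, assuming a hypothetical struct name and DPDK's rte_spinlock;
not the actual vhost code.]

#include <rte_spinlock.h>

struct vq_access {
        rte_spinlock_t lock;    /* the detail the wrappers hide from callers */
};

/* callers never see the lock's type or name, only these helpers */
static inline void
vq_access_lock(struct vq_access *a)
{
        rte_spinlock_lock(&a->lock);
}

static inline void
vq_access_unlock(struct vq_access *a)
{
        rte_spinlock_unlock(&a->lock);
}

[Swapping the locking mechanism, e.g. for a reader/writer lock, would then
only touch these two helpers, which is the benefit being claimed above.]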

YAGNI: you aren't gonna need it.

Don't build infrastructure for things that you merely foresee.

> > 
> > The bigger problem is that doing locking on all enqueue/dequeue
> > can have a visible performance impact. Did you measure that?
> > 
> > Could you invent an RCUish mechanism using compiler barriers?
> >   
> 
> I've played a bit with measuring the performance impact. A successful
> lock acquisition adds on average about 30 cycles on my Haswell CPU
> (and it succeeds 99.999...% of the time).
> 
> I can investigate it more, but my initial feeling is that adding a
> memory barrier (the real one, not the compiler barrier) would add
> about the same overhead.
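
[For illustration: one way numbers like these could be gathered with DPDK's
TSC helpers, timing an uncontended spinlock round trip and a full memory
barrier back to back; a rough sketch, not the benchmark actually used here.]

#include <stdio.h>
#include <stdint.h>
#include <rte_atomic.h>
#include <rte_cycles.h>
#include <rte_spinlock.h>

static void
time_lock_vs_barrier(void)
{
        rte_spinlock_t sl = RTE_SPINLOCK_INITIALIZER;
        const unsigned int iters = 1000000;
        uint64_t start;
        unsigned int i;

        start = rte_rdtsc_precise();
        for (i = 0; i < iters; i++) {
                rte_spinlock_lock(&sl);         /* always uncontended here */
                rte_spinlock_unlock(&sl);
        }
        printf("lock+unlock: %.1f cycles\n",
               (double)(rte_rdtsc_precise() - start) / iters);

        start = rte_rdtsc_precise();
        for (i = 0; i < iters; i++)
                rte_smp_mb();                   /* the "real" barrier above */
        printf("rte_smp_mb:  %.1f cycles\n",
               (double)(rte_rdtsc_precise() - start) / iters);
}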
> 
> By the way, the scheme update_queuing_status() in
> drivers/net/vhost/rte_eth_vhost.c uses to avoid contention with
> the active queue, by toggling "allow_queuing" and "while_queuing",
> seems to be broken, since memory barriers are missing.
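
[For illustration: a schematic of the "allow_queuing"/"while_queuing" style
of handshake described above, with the SMP barriers this message says are
missing; simplified names, not the actual rte_eth_vhost.c code.]

#include <rte_atomic.h>
#include <rte_pause.h>

struct q_flags {
        rte_atomic32_t allow_queuing;   /* control path: datapath may run */
        rte_atomic32_t while_queuing;   /* datapath: currently in a burst */
};

/* datapath, at the start of each rx/tx burst */
static inline int
enter_burst(struct q_flags *q)
{
        rte_atomic32_set(&q->while_queuing, 1);
        rte_smp_mb();   /* publish while_queuing before checking allow_queuing */
        if (rte_atomic32_read(&q->allow_queuing) == 0) {
                rte_atomic32_set(&q->while_queuing, 0);
                return 0;               /* ring is being reconfigured, skip */
        }
        return 1;
}

/* datapath, at the end of the burst */
static inline void
leave_burst(struct q_flags *q)
{
        rte_atomic32_set(&q->while_queuing, 0);
}

/* control path: fence off the datapath before touching the ring */
static inline void
stop_queuing(struct q_flags *q)
{
        rte_atomic32_set(&q->allow_queuing, 0);
        rte_smp_mb();   /* store must be visible before polling while_queuing */
        while (rte_atomic32_read(&q->while_queuing))
                rte_pause();
}

[Both sides need a store-then-load ordering, which is why a full barrier,
not just a compiler barrier, would be required at those two points.]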

CPU cycles alone don't matter on modern x86.
What matters is cache behavior and instructions per cycle.
In this case locking requires a locked instruction, which stalls the CPU's
prefetching and instruction pipeline.
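
[For illustration, picking up the "RCU-ish mechanism using compiler barriers"
question quoted above: one shape such a scheme could take for a single
polling datapath thread per queue. Hypothetical names, not existing DPDK
code; it assumes a 64-bit platform and that the datapath keeps polling, so
an idle queue would need an explicit quiescent report.]

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_pause.h>

struct ring_state;                              /* whatever the datapath uses */

static struct ring_state *volatile active_ring; /* written only by control path */
static volatile uint64_t poll_passes;           /* written only by the datapath */

/* datapath: one poll iteration, no locks and no locked instructions */
static inline void
datapath_poll(void)
{
        struct ring_state *r = active_ring;     /* one snapshot per iteration */

        (void)r;        /* placeholder: real code would enqueue/dequeue on r */

        rte_compiler_barrier(); /* keep the bump after the ring accesses in the
                                 * generated code; on x86 (TSO) no CPU fence is
                                 * needed on this side */
        poll_passes++;
}

/* control path: publish a new ring, then wait out any reader of the old one */
static void
replace_ring(struct ring_state *new_ring)
{
        uint64_t seen;

        active_ring = new_ring;
        rte_smp_mb();           /* make the new pointer globally visible before
                                 * sampling the pass counter */
        seen = poll_passes;     /* at most one old-ring iteration is in flight */
        while (poll_passes == seen)
                rte_pause();
        /* the old ring is no longer referenced and can be reclaimed */
}

[The datapath side here costs only plain loads and stores per iteration,
avoiding the locked instruction and pipeline stall described above; the cost
moves to the control path, which may spin for up to one poll iteration.]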

Thread overview: 5+ messages
2017-12-20 14:37 Victor Kaplansky
2017-12-20 19:06 ` Stephen Hemminger
2017-12-20 20:06   ` Victor Kaplansky
2017-12-20 20:19     ` Stephen Hemminger [this message]
2017-12-21 12:41       ` Victor Kaplansky
