DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: Suanming Mou <suanmingm@nvidia.com>, Ori Kam <orika@mellanox.com>,
	Matan Azrad <matan@mellanox.com>,
	Shahaf Shuler <shahafs@mellanox.com>,
	Viacheslav Ovsiienko <viacheslavo@mellanox.com>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	Andrew Rybchenko <arybchenko@solarflare.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	joyce.kong@arm.com, phil.yang@arm.com, steve.capper@arm.com,
	honnappa.nagarahalli@arm.com
Subject: Re: [dpdk-dev] [RFC] ethdev: make rte flow API thread safe
Date: Tue, 8 Sep 2020 09:02:48 -0700
Message-ID: <20200908090248.7a78888a@hermes.lan>
In-Reply-To: <3402903.UWDKPdykxe@thomas>

On Tue, 08 Sep 2020 17:03:53 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:

> 08/09/2020 16:52, Stephen Hemminger:
> > On Mon, 7 Sep 2020 02:36:48 +0000
> > Suanming Mou <suanmingm@nvidia.com> wrote:  
> > > > What is the performance impact of this for currently working applications that
> > > > use a single thread to program flow rules.  You are adding a couple of system
> > > > calls to what was formerly a totally usermode operation.    
> > 
> > Read the source for glibc and see what pthread_mutex does  
> 
> What would be the best lock for rte_flow?
> We have spin lock, ticket lock, MCS lock (and rwlock) in DPDK.

The tradeoff is between speed, correctness, and simplicity.
The flow case is performance sensitive (for connections-per-second tests),
but not super critical (i.e. it is not on the per-packet path).
The fastest option would be RCU, but that is probably not necessary here.

There would rarely be contention on this lock (thread safety is new), and
there is no case where a read lock makes sense. For hardware drivers,
programming a flow rule is basically an interaction with the TCAM (i.e. fast).
For software drivers, installing a flow rule typically requires a system call
to set up a classifier, etc. Holding a spin lock across system calls leads to
preemption and other issues.
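To make that distinction concrete, here is a rough sketch; the lock names and
helper functions are made up for illustration, not taken from any PMD. A
spinlock is fine when the critical section is a few register writes, while a
pthread mutex lets waiters sleep if the holder ends up blocking in the kernel.

#include <pthread.h>
#include <rte_spinlock.h>

/* Hypothetical per-port locks, purely illustrative. */
static rte_spinlock_t hw_lock = RTE_SPINLOCK_INITIALIZER;   /* holder never sleeps */
static pthread_mutex_t sw_lock = PTHREAD_MUTEX_INITIALIZER; /* holder may block */

static void
hw_insert_rule(void)
{
	rte_spinlock_lock(&hw_lock);
	/* short user-mode TCAM/register programming; waiters spin only briefly */
	rte_spinlock_unlock(&hw_lock);
}

static void
sw_insert_rule(void)
{
	pthread_mutex_lock(&sw_lock);
	/* may issue ioctl()/netlink calls to set up a kernel classifier;
	 * waiters sleep instead of burning CPU while this blocks */
	pthread_mutex_unlock(&sw_lock);
}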

Would it be possible to push the choice of mutual exclusion down to
the device driver? Fast HW devices could use a spinlock, and slow SW
devices a pthread mutex.
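One possible shape for that, purely as a sketch (the struct, field, and
function names below are invented for illustration, not actual ethdev or
rte_flow code): the generic layer takes a sleeping lock only when the PMD has
not declared that it serializes flow operations itself.

#include <pthread.h>

struct flow_dev {                        /* hypothetical per-port state */
	int pmd_thread_safe;             /* PMD does its own locking (e.g. spinlock) */
	pthread_mutex_t flow_mutex;      /* generic fallback for slow/SW PMDs */
};

static int
flow_create_locked(struct flow_dev *dev,
		   int (*pmd_flow_create)(struct flow_dev *))
{
	int ret;

	if (dev->pmd_thread_safe)
		return pmd_flow_create(dev);      /* fast HW path, PMD-chosen lock */

	pthread_mutex_lock(&dev->flow_mutex);     /* sleeps rather than spins */
	ret = pmd_flow_create(dev);               /* may make system calls */
	pthread_mutex_unlock(&dev->flow_mutex);
	return ret;
}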

