DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Olivier MATZ <olivier.matz@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
Date: Thu, 18 Dec 2014 16:04:02 +0000	[thread overview]
Message-ID: <20141218160400.GB2460@bricha3-MOBL3> (raw)
In-Reply-To: <5492EE90.4040502@6wind.com>

On Thu, Dec 18, 2014 at 04:11:12PM +0100, Olivier MATZ wrote:
> Hi,
> 
> On 12/18/2014 03:32 PM, Bruce Richardson wrote:
> > On Thu, Dec 18, 2014 at 12:20:07PM +0000, Walukiewicz, Miroslaw wrote:
> >> I have another question regarding your patch.
> >>
> >>  Could we extend the values returned by rte_lcore_id() so that they are set per thread (a DPDK lcore is really a pthread, just started on a specific core), instead of creating a separate linear thread id?
> >>
> >> The patch would be much simpler and would work the same way. The only change would be extending rte_lcore_id when rte_pthread_create() is called.
> >>
> >> The variable __lcore_id already has the __thread attribute, which means it is valid not only per CPU core but also per thread.
> >>
> >> The mempools, timers, statistics would work without any modifications in that environment.
> >>
> >>  I do not see any reason why legacy DPDK applications would not work in that model.
> >>
> >> Mirek
> > 
> > Definite +1 here. 
> 
> One remark though: it looks like the rte_rings (and therefore the
> rte_mempools) are designed with the assumption that each execution
> unit is alone on its core.
> 
> As explained in [1], there is a risk that a pthread is preempted
> by the kernel at a bad moment, leaving another thread blocked,
> spinning on a variable while waiting for its value to change.
> 
> The same could also occur with spinlocks, which, unlike pthread
> mutexes, are not designed to wake up another pthread while the
> lock is held.
> 
> And finally, having several pthreads per core implies that the
> application should be designed with large queues: if a pthread is
> not scheduled for 10 ms, that represents 100K packets at 10M PPS.
> 
> I'm not saying it's impossible to do, but I think it's not so
> simple :)
> 
> Regards,
> Olivier
> 
> [1] http://dpdk.org/ml/archives/dev/2013-November/000714.html
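
A simplified sketch of the hazard described above (this is not DPDK's actual
rte_ring code, just the general multi-producer pattern written with C11
atomics): space is reserved with a compare-and-swap on the head index and then
published by advancing the tail in order, so a producer preempted between the
two steps leaves every later producer spinning.

#include <stdint.h>
#include <stdatomic.h>
#include <emmintrin.h>   /* _mm_pause() */

/* Toy multi-producer ring in the spirit of a lock-free ring; capacity
 * and overflow checks are omitted to keep the hazard visible. */
struct toy_ring {
	_Atomic uint32_t prod_head;   /* next slot to reserve     */
	_Atomic uint32_t prod_tail;   /* last published slot + 1  */
	void *slots[1024];
};

static void toy_ring_enqueue(struct toy_ring *r, void *obj)
{
	uint32_t head, next;

	/* Reserve a slot with a compare-and-swap on prod_head. */
	do {
		head = atomic_load(&r->prod_head);
		next = head + 1;
	} while (!atomic_compare_exchange_weak(&r->prod_head, &head, next));

	r->slots[head & 1023] = obj;

	/* Publish in order: wait for earlier reservations to complete.
	 * If the thread holding an earlier reservation is preempted right
	 * here, every later producer sharing the core spins in this loop
	 * until the kernel reschedules the preempted thread. */
	while (atomic_load(&r->prod_tail) != head)
		_mm_pause();

	atomic_store(&r->prod_tail, next);
}

The spinlock point is the same in nature: a busy-wait lock never sleeps, so a
waiter burns its whole timeslice, whereas a pthread mutex can put the waiter to
sleep and wake it when the owner unlocks.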

Yes, I completely agree, but because there are so many potential pitfalls, I
think we should take small steps rather than make major changes up front. Hence
my agreement with Mirek's suggestion: it's the simplest way forward, and it
gives us good possibilities and flexibility without major work.
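
As a minimal sketch of what Mirek's suggestion could look like (the wrapper
name, the starting id and the allocation scheme here are purely illustrative,
not the actual patch): each extra pthread gets its own copy of a __thread id,
assigned by a small trampoline before the user function runs, so
rte_lcore_id()-style lookups keep working per thread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>
#include <errno.h>

/* Per-thread id in the spirit of EAL's __thread __lcore_id: every
 * pthread sees its own copy, so per-"lcore" mempool caches, stats and
 * timers naturally become per-thread. */
static __thread unsigned int my_lcore_id = UINT32_MAX;

/* Illustrative id allocator; starting above the classic per-core range
 * keeps the extra threads from colliding with real lcore ids. */
static atomic_uint next_id = ATOMIC_VAR_INIT(64);

struct thread_ctx {
	void *(*fn)(void *);
	void *arg;
};

static void *thread_trampoline(void *p)
{
	struct thread_ctx ctx = *(struct thread_ctx *)p;

	free(p);
	/* Give this pthread a unique id before running user code. */
	my_lcore_id = atomic_fetch_add(&next_id, 1);
	return ctx.fn(ctx.arg);
}

/* Hypothetical rte_pthread_create()-style wrapper: spawn an extra
 * pthread and hand it its own per-thread id. */
static int my_pthread_create(pthread_t *tid, void *(*fn)(void *), void *arg)
{
	struct thread_ctx *ctx = malloc(sizeof(*ctx));
	int rc;

	if (ctx == NULL)
		return ENOMEM;
	ctx->fn = fn;
	ctx->arg = arg;
	rc = pthread_create(tid, NULL, thread_trampoline, ctx);
	if (rc != 0)
		free(ctx);
	return rc;
}

In the proposal quoted above the id would simply be the value returned by
rte_lcore_id(), extended when rte_pthread_create() is called; the point of the
sketch is only that a __thread variable makes that extension cheap.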

BTW: for the multi-thread-per-core case, I think we'll need to look at having
the threads make extensive use of yields to help force context switches at the
proper times.

/Bruce
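
As a rough illustration of that kind of cooperative yielding (rx_poll() below
is just a stand-in for whatever per-thread work loop the application runs, not
a DPDK API): after a run of empty polls the thread calls sched_yield(), giving
the other pthreads pinned to the same core a chance to run instead of burning
the rest of the timeslice busy-polling.

#include <sched.h>
#include <stdbool.h>

/* Stand-in for the thread's real work: poll a queue and return the
 * number of packets handled (0 means nothing to do right now). */
extern unsigned int rx_poll(void);

static void worker_loop(volatile bool *stop)
{
	unsigned int idle = 0;

	while (!*stop) {
		if (rx_poll() > 0) {
			idle = 0;
			continue;
		}
		/* Nothing to do: after a few empty polls, yield so the
		 * other pthreads sharing this core get scheduled. */
		if (++idle >= 16) {
			idle = 0;
			sched_yield();
		}
	}
}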

Thread overview: 37+ messages
2014-12-11  2:04 Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 1/7] eal: add linear thread id as pthread-local variable Cunming Liang
2014-12-16  7:00   ` Qiu, Michael
2014-12-22 19:02   ` Ananyev, Konstantin
2014-12-23  9:56     ` Liang, Cunming
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 2/7] mempool: use linear-tid as mempool cache index Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 3/7] ring: use linear-tid as ring debug stats index Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 4/7] eal: add simple API for multi-pthread Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 5/7] testpmd: support multi-pthread mode Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 6/7] sample: add new sample for multi-pthread Cunming Liang
2014-12-11  2:04 ` [dpdk-dev] [RFC PATCH 7/7] eal: macro for cpuset w/ or w/o CPU_ALLOC Cunming Liang
2014-12-11  2:54 ` [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore Jayakumar, Muthurajan
2014-12-11  9:56 ` Walukiewicz, Miroslaw
2014-12-12  5:44   ` Liang, Cunming
2014-12-15 11:10     ` Walukiewicz, Miroslaw
2014-12-15 11:53       ` Liang, Cunming
2014-12-18 12:20         ` Walukiewicz, Miroslaw
2014-12-18 14:32           ` Bruce Richardson
2014-12-18 15:11             ` Olivier MATZ
2014-12-18 16:04               ` Bruce Richardson [this message]
2014-12-18 16:15           ` Stephen Hemminger
2014-12-19  1:28           ` Liang, Cunming
2014-12-19 10:03             ` Bruce Richardson
2014-12-22  1:51               ` Liang, Cunming
2014-12-22  9:46                 ` Bruce Richardson
2014-12-22 10:01                   ` Walukiewicz, Miroslaw
2014-12-23  9:45                     ` Liang, Cunming
2014-12-22 18:28                   ` Stephen Hemminger
2014-12-23  9:19                     ` Walukiewicz, Miroslaw
2014-12-23  9:23                       ` Bruce Richardson
2014-12-23  9:51                     ` Liang, Cunming
2015-01-08 17:05                       ` Ananyev, Konstantin
2015-01-08 17:23                         ` Richardson, Bruce
2015-01-09  9:51                           ` Liang, Cunming
2015-01-09  9:40                         ` Liang, Cunming
2015-01-09 11:52                           ` Ananyev, Konstantin
2015-01-09  9:45                         ` Liang, Cunming
