From: "Walukiewicz, Miroslaw"
To: "Liang, Cunming", "dev@dpdk.org"
Date: Mon, 15 Dec 2014 11:10:48 +0000
Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore

Hi Cunming,

The timers could be used by any application/library started as a standard
pthread. Each pthread needs to have some identifier assigned, the same way
as you are doing it for mempools (the rte_linear_thread_id and rte_lcore_id
are good examples).

I made a series of patches extending the rte_timer API to work with such an
identifier while keeping the existing API working as well. I will send them
soon.

Mirek

> -----Original Message-----
> From: Liang, Cunming
> Sent: Friday, December 12, 2014 6:45 AM
> To: Walukiewicz, Miroslaw; dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
>
> Thanks Mirek. That's a good point which wasn't mentioned in the cover
> letter. For 'rte_timer', I only expect it to be used within the 'legacy
> per-lcore' pthread. I'd appreciate it if you could give me some cases it
> can't fit.
> If 'rte_timer' has to be used in multi-pthread mode, there are some
> prerequisites and limitations:
> 1. Make sure the thread-local variable 'lcore_id' is set correctly
>    (e.g. do pthread init by rte_pthread_prepare).
> 2. As 'rte_timer' is not preemptable, when using rte_timer_manage/reset
>    in multi-pthread mode, make sure they're not on the same core.
>
> -Cunming
>
> > -----Original Message-----
> > From: Walukiewicz, Miroslaw
> > Sent: Thursday, December 11, 2014 5:57 PM
> > To: Liang, Cunming; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
> >
> > Thank you, Cunming, for the explanation.
> >
> > What about DPDK timers? They also depend on rte_lcore_id() to avoid
> > spinlocks.
> >
> > Mirek
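[A minimal sketch of the pattern discussed above: a normal (non-EAL) pthread
first gets its per-thread id set up and only then uses rte_timer. Note that
rte_pthread_prepare() is the helper proposed in this RFC, not an existing
upstream API, so its name and behaviour are assumptions here; the rte_timer
calls are the standard ones.]

#include <pthread.h>
#include <rte_lcore.h>
#include <rte_cycles.h>
#include <rte_timer.h>

static struct rte_timer tim;

static void
timer_cb(struct rte_timer *t, void *arg)
{
        /* periodic work for this thread goes here */
}

static void *
worker(void *arg)
{
        /* Assumed RFC helper: assigns the thread-local (linear) id that the
         * mempool cache -- and, with the follow-up timer patches, rte_timer
         * -- would key off instead of a raw EAL lcore id. */
        rte_pthread_prepare();

        /* rte_timer_subsystem_init() is assumed to have run in main() */
        rte_timer_init(&tim);

        /* arm a periodic 1s timer owned by this thread's core */
        rte_timer_reset(&tim, rte_get_timer_hz(), PERIODICAL,
                        rte_lcore_id(), timer_cb, NULL);

        /* per point 2 above: only one pthread per core should call
         * rte_timer_manage(), since rte_timer is not preemptable */
        for (;;)
                rte_timer_manage();

        return NULL;
}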
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Cunming Liang
> > > Sent: Thursday, December 11, 2014 3:05 AM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
> > >
> > >
> > > Scope & Usage Scenario
> > > ======================
> > >
> > > DPDK usually pins one pthread per core to avoid the overhead of task
> > > switching. That gains a lot of performance, but it is not efficient in
> > > all cases. Sometimes it is too expensive to dedicate a whole core to a
> > > lightweight workload. It is a reasonable demand to have multiple
> > > threads per core, with each thread sharing the CPU in an assigned
> > > weight.
> > >
> > > In fact, nothing prevents the user from creating normal pthreads and
> > > using cgroups to control the CPU shares. One purpose of this patchset
> > > is to close the gaps in using more DPDK libraries from normal
> > > pthreads. In addition, it demonstrates a performance gain by
> > > proactively yielding when idle-looping in packet IO. It also provides
> > > several 'rte_pthread_*' APIs to make life easier.
> > >
> > >
> > > Changes to DPDK libraries
> > > =========================
> > >
> > > Some DPDK libraries must run in the DPDK environment.
> > >
> > > # rte_mempool
> > >
> > > The rte_mempool documentation mentions that a thread not created by
> > > the EAL must not use mempools. The root cause is the per-lcore cache
> > > inside the mempool: 'rte_lcore_id()' will not return a correct value
> > > for such a thread.
> > >
> > > The patchset changes this a little. The index into the mempool cache
> > > is no longer an lcore_id but a linear number generated by an
> > > allocator. Legacy EAL per-lcore threads get a unique linear id
> > > assigned during creation. A normal pthread that wants to use
> > > rte_mempool has to request a linear id explicitly. The mempool cache
> > > thereby becomes per-thread, and the linear id actually identifies the
> > > linear thread id.
> > >
> > > However, there is another problem: rte_mempool is not preemptable.
> > > The problem comes from rte_ring, so both are discussed in the next
> > > section.
> > >
> > > # rte_ring
> > >
> > > rte_ring supports multi-producer enqueue and multi-consumer dequeue,
> > > but it is not preemptable. There was a conversation about this before:
> > > http://dpdk.org/ml/archives/dev/2013-November/000714.html
> > >
> > > Say there are two pthreads running on the same core, both enqueuing to
> > > the same rte_ring. If the 1st pthread is preempted by the 2nd pthread
> > > after it has already modified prod.head, the 2nd pthread will spin
> > > until the 1st one is scheduled again. That wastes time, and if the 2nd
> > > pthread has strictly higher priority, it is even worse.
> > >
> > > But that doesn't mean we can't use it. We just need to narrow down the
> > > situations in which it is used by multiple pthreads on the same core:
> > > - It CAN be used in any single-producer or single-consumer situation.
> > > - It MAY be used by multi-producer/consumer pthreads whose scheduling
> > >   policies are all SCHED_OTHER (CFS). Users SHOULD be aware of the
> > >   performance penalty before using it.
> > > - It MUST NOT be used by multi-producer/consumer pthreads when some of
> > >   their scheduling policies are SCHED_FIFO or SCHED_RR.
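[An illustrative sketch, not part of the patchset, of the first (safe) case
above: two pthreads sharing one core exchange objects through a ring created
with the single-producer/single-consumer flags, so a preempted enqueue or
dequeue cannot leave the peer spinning on prod.head/cons.head. The ring name
and helper functions are made up for the example; the rte_ring calls are the
standard API.]

#include <rte_ring.h>
#include <rte_memory.h>
#include <rte_errno.h>

static struct rte_ring *worker_ring;

/* one producer pthread and one consumer pthread, possibly on the same
 * core: the SP/SC flags keep enqueue/dequeue free of the multi-producer
 * head/tail update that a preempted peer could stall on */
static int
setup_ring(void)
{
        worker_ring = rte_ring_create("worker_ring", 1024, SOCKET_ID_ANY,
                                      RING_F_SP_ENQ | RING_F_SC_DEQ);
        return worker_ring != NULL ? 0 : -rte_errno;
}

/* called only from the single producer pthread */
static inline int
produce(void *obj)
{
        return rte_ring_sp_enqueue(worker_ring, obj);
}

/* called only from the single consumer pthread */
static inline int
consume(void **obj)
{
        return rte_ring_sc_dequeue(worker_ring, obj);
}

[If more than one producer or consumer has to share a ring on the same core,
the MAY/MUST NOT rules above apply instead.]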
> > >
> > >
> > > Performance
> > > ===========
> > >
> > > Introducing task switching costs some performance. From the packet IO
> > > perspective, we can gain some of it back by improving the effective IO
> > > rate. When a pthread idle-loops on an empty rx queue, it should
> > > proactively yield. We can also slow down rx for a short while to take
> > > more advantage of bulk receiving in the next loop. In practice,
> > > increasing the rx ring size also helps to improve the overall
> > > throughput.
> > >
> > >
> > > Cgroup Control
> > > ==============
> > >
> > > Here is a simple example: four pthreads do packet IO on the same core,
> > > and we expect their CPU shares in the ratio 1:1:2:4.
> > > > mkdir /sys/fs/cgroup/cpu/dpdk
> > > > mkdir /sys/fs/cgroup/cpu/dpdk/thread0
> > > > mkdir /sys/fs/cgroup/cpu/dpdk/thread1
> > > > mkdir /sys/fs/cgroup/cpu/dpdk/thread2
> > > > mkdir /sys/fs/cgroup/cpu/dpdk/thread3
> > > > cd /sys/fs/cgroup/cpu/dpdk
> > > > echo 256 > thread0/cpu.shares
> > > > echo 256 > thread1/cpu.shares
> > > > echo 512 > thread2/cpu.shares
> > > > echo 1024 > thread3/cpu.shares
> > >
> > >
> > > -END-
> > >
> > > Any comments are welcome.
> > >
> > > Thanks
> > >
> > > Cunming Liang (7):
> > >   eal: add linear thread id as pthread-local variable
> > >   mempool: use linear-tid as mempool cache index
> > >   ring: use linear-tid as ring debug stats index
> > >   eal: add simple API for multi-pthread
> > >   testpmd: support multi-pthread mode
> > >   sample: add new sample for multi-pthread
> > >   eal: macro for cpuset w/ or w/o CPU_ALLOC
> > >
> > >  app/test-pmd/cmdline.c                     |  41 +++++
> > >  app/test-pmd/testpmd.c                     |  84 ++++++++-
> > >  app/test-pmd/testpmd.h                     |   1 +
> > >  config/common_linuxapp                     |   1 +
> > >  examples/multi-pthread/Makefile            |  57 ++++++
> > >  examples/multi-pthread/main.c              | 232 ++++++++++++++++++++++++
> > >  examples/multi-pthread/main.h              |  46 +++++
> > >  lib/librte_eal/common/include/rte_eal.h    |  15 ++
> > >  lib/librte_eal/common/include/rte_lcore.h  |  12 ++
> > >  lib/librte_eal/linuxapp/eal/eal_thread.c   | 282 +++++++++++++++++++++++++++---
> > >  lib/librte_mempool/rte_mempool.h           |  22 +--
> > >  lib/librte_ring/rte_ring.h                 |   6 +-
> > >  12 files changed, 755 insertions(+), 44 deletions(-)
> > >  create mode 100644 examples/multi-pthread/Makefile
> > >  create mode 100644 examples/multi-pthread/main.c
> > >  create mode 100644 examples/multi-pthread/main.h
> > >
> > > --
> > > 1.8.1.4