Date: Wed, 10 Dec 2014 19:34:16 -0500
From: Neil Horman
To: Stephen Hemminger
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
Message-ID: <20141211003416.GB24240@localhost.localdomain>
In-Reply-To: <20141210153837.1f30eed8@urahara>

On Wed, Dec 10, 2014 at 03:38:37PM -0800, Stephen Hemminger wrote:
> On Wed, 10 Dec 2014 11:16:46 -0500
> Neil Horman wrote:
>
> > This really seems like a false savings to me. If an application intends
> > to use multiple processes (which by all rights seems to be the use case
> > the DPDK is mostly designed for), then you need locking one way or
> > another, and you've just made application coding harder, because the
> > application now needs to know which functions have internal critical
> > sections that it must provide locking for.
>
> The DPDK is not Linux.

I never indicated that it was.

> See the examples of how to route without using locks by doing asymmetric
> multiprocessing, i.e. queues are only serviced by one CPU.

Yes, I've seen them.

> The cost of a locked operation (even uncontended) is often enough to drop
> packet performance by several million PPS.

Please re-read my note; I clearly stated that a single-process use case was a
valid one, but that it didn't preclude the need to provide mutual exclusion
internally in the API. There's no reason this locking can't be moved into the
API, with the spinlock calls themselves either defined to do real locking or
compiled out as empty macros, selected at build time by a configuration
variable (CONFIG_SINGLE_ACCESSOR or some such). That way you save the
application the headache of having to guess which API calls need locking
around them, and you still get maximal performance when the application can
guarantee single-accessor status to the DPDK library.

Neil
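
P.S. Here is a minimal sketch of the scheme I'm describing. The
CONFIG_SINGLE_ACCESSOR flag, the dpdk_lock_* wrapper macros, and
dpdk_update_port_stats() are all hypothetical names for illustration;
only the rte_spinlock_* calls are the real DPDK API.

#include <rte_spinlock.h>

#ifdef CONFIG_SINGLE_ACCESSOR
/* The application guarantees a single accessor, so the library's
 * internal locking compiles away to nothing. */
#define dpdk_lock_init(sl)  do { (void)(sl); } while (0)
#define dpdk_lock(sl)       do { (void)(sl); } while (0)
#define dpdk_unlock(sl)     do { (void)(sl); } while (0)
#else
/* Default build: real spinlocks guard the internal critical sections. */
#define dpdk_lock_init(sl)  rte_spinlock_init(sl)
#define dpdk_lock(sl)       rte_spinlock_lock(sl)
#define dpdk_unlock(sl)     rte_spinlock_unlock(sl)
#endif

/* A hypothetical library call with an internal critical section; the
 * application never has to know the lock exists. */
static rte_spinlock_t port_lock = RTE_SPINLOCK_INITIALIZER;

void
dpdk_update_port_stats(void)
{
        dpdk_lock(&port_lock);
        /* ... modify shared per-port state here ... */
        dpdk_unlock(&port_lock);
}

A single-accessor build pays nothing for this; everyone else gets correct
locking without having to sprinkle it through the application.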