Date: Mon, 22 Dec 2014 17:33:07 +0000
From: Bruce Richardson
To: Thomas Monjalon
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 0/3] DPDK ethdev callback support
Message-ID: <20141222173306.GA11568@bricha3-MOBL3>
In-Reply-To: <1698504.LDQKkGMxYZ@xps13>
References: <1419266844-4848-1-git-send-email-bruce.richardson@intel.com> <1698504.LDQKkGMxYZ@xps13>
Organization: Intel Shannon Ltd.
User-Agent: Mutt/1.5.23 (2014-03-12)

On Mon, Dec 22, 2014 at 06:02:53PM +0100, Thomas Monjalon wrote:
> Hi Bruce,
>
> Callbacks, as hooks for applications, give more flexibility and are
> generally a good idea.
> In DPDK the main issue will be to avoid performance degradation.
> I see you use "unlikely" for callback branching.
> Could we reduce more the impact of this test by removing the queue array,
> i.e. having port-wide callbacks instead of per-queue callbacks?

I can give that a try, but I don't see it making much difference, if any.
The main thing to avoid with branching is branch mis-prediction, which
should not be a problem here: the user is not going to be adding or
removing callbacks between each RX and TX call, so the branches are highly
predictable, i.e. they always go the same way. (The check itself is just a
single pointer test per burst - see the snippet further down.)

The reason for using per-queue callbacks is that I think we can do more
with them that way. For instance, if we want to do some additional
processing or calculations on only IP traffic, then we can use hardware
offloads on most NICs to steer the IP traffic to a separate queue and only
apply the callbacks to that queue. If the performance is the same, I think
we should therefore keep the per-queue version.
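
To make that concrete, here is roughly how I'd expect an application to
use the per-queue version. The function name and signature below are just
illustrative - they may not match what ends up in the final patches - but
the shape of it is: register a callback on the one queue that the NIC
filters steer IP traffic to, and leave every other queue on the plain RX
path.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define IP_QUEUE 1  /* queue the NIC filters steer IP traffic to */

static uint64_t ip_pkt_count;

/* extra per-packet work, run only for bursts received on IP_QUEUE */
static uint16_t
ip_rx_cb(uint8_t port, uint16_t queue, struct rte_mbuf *pkts[],
         uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	uint64_t *count = user_param;

	(void)port; (void)queue; (void)pkts; (void)max_pkts;
	*count += nb_pkts;
	return nb_pkts;  /* a callback could also drop or rewrite packets */
}

static void
attach_ip_queue_callback(uint8_t port)
{
	/* all other queues keep the callback-free fast path; the return
	 * value (a handle for the yet-to-be-written remove function) is
	 * ignored in this sketch */
	rte_eth_add_rx_callback(port, IP_QUEUE, ip_rx_cb, &ip_pkt_count);
}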
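
And on the branching cost: when no callback is installed, the extra test
in the data path boils down to a single pointer check per burst, something
along these lines (simplified, with made-up field and helper names rather
than the literal code from the patches):

/* inside rte_eth_rx_burst(), after the PMD's burst function has run;
 * rx_cbs[] stands in for the per-queue callback array, and
 * run_rx_callbacks() for whatever invokes the installed callbacks */
if (unlikely(dev->rx_cbs[queue_id] != NULL))
	nb_rx = run_rx_callbacks(dev, queue_id, rx_pkts, nb_rx, nb_pkts);

So with no callbacks installed it is one never-taken, well-predicted
branch per burst of packets, not per packet.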
> 2014-12-22 16:47, Bruce Richardson:
> > Future extensions: in future the ethdev library can be extended to provide
> > a standard set of callbacks for use by drivers.
>
> Having callbacks for drivers seems strange to me.
> If drivers need to accomplish some tasks, they do it by implementing an
> ethdev service. New services are declared for new needs.
> Callbacks are the reverse logic. Why should it be needed?

Typo, I meant for applications! Drivers don't need them, indeed.

> > For now this patch set is RFC and still needs additional work for creating
> > a remove function for callbacks and to add in additional testing code.
> > Since this adds in new code into the critical data path, I have run some
> > performance tests using testpmd with the ixgbe vector drivers (i.e. the
> > fastest, fast-path we have :-) ). Performance drops due to this patch
> > seems minimal to non-existant, rough tests on my system indicate a drop
> > of perhaps 1%.
> >
> > All feedback welcome.
>
> It would be good to have more performance tests with different configurations.

Sure, if you have ideas for specific tests you'd like to see, I'll try and
get some numbers. What I did look at was the performance impact of this
patch without actually putting any callbacks in place, and the worst case
here is hardly noticeable. For an empty callback, i.e. the pure callback
overhead, the performance drop should still be in the low single-digit
percentages, but I'll test to confirm that. For other, slower RX and TX
paths, e.g. those using scattered packets or TX offloads, the performance
impact will be even less.

Regards,
/Bruce

> Thanks
> --
> Thomas
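
P.S. For clarity, by an "empty callback" above I mean literally a no-op
that returns the packets untouched, registered on the queue under test -
something like the following (same caveat as above on the exact signature):

static uint16_t
noop_rx_cb(uint8_t port, uint16_t queue, struct rte_mbuf *pkts[],
           uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	/* touch nothing; measures pure callback invocation overhead */
	(void)port; (void)queue; (void)pkts; (void)max_pkts; (void)user_param;
	return nb_pkts;
}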