From: Bruce Richardson
To: "Varghese, Vipin"
Cc: "Loftus, Ciara", 'Stephen Hemminger', "'dev@dpdk.org'", "Ye, Xiaolong",
 "Laatz, Kevin", "Yigit, Ferruh"
Date: Mon, 21 Oct 2019 14:56:04 +0100
Subject: Re: [dpdk-dev] [PATCH v2 2/3] net/af_xdp: support pinning of IRQs
Message-ID: <20191021135604.GE942@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <4C9E0AB70F954A408CC4ADDBF0F8FA7D4D3DD007@BGSMSX101.gar.corp.intel.com>
References: <20190930164205.19419-1-ciara.loftus@intel.com>
 <20190930164205.19419-3-ciara.loftus@intel.com>
 <20190930101137.4919f93e@hermes.lan>
 <74F120C019F4A64C9B78E802F6AD4CC279226C6C@IRSMSX106.ger.corp.intel.com>
 <74F120C019F4A64C9B78E802F6AD4CC27924737D@IRSMSX106.ger.corp.intel.com>
 <4C9E0AB70F954A408CC4ADDBF0F8FA7D4D3DCF94@BGSMSX101.gar.corp.intel.com>
 <20191021130416.GB942@bricha3-MOBL.ger.corp.intel.com>
 <4C9E0AB70F954A408CC4ADDBF0F8FA7D4D3DCFC2@BGSMSX101.gar.corp.intel.com>
 <20191021131718.GD942@bricha3-MOBL.ger.corp.intel.com>

On Mon, Oct 21, 2019 at 02:45:05PM +0100, Varghese, Vipin wrote:
> Hi Bruce,
>
> snipped
>
> > > > This ability to have the driver pin the interrupts for the user
> > > > would be a big timesaver for developers too, who may be constantly
> > > > re-running apps when testing.
> > > My understanding is that the user cannot, or should not, pass DPDK
> > > cores for interrupt pinning. So should we ask the driver to fetch
> > > `rte_eal_configuration` and ensure the same?
> >
> > Actually I disagree. I think the user should pass the cores for
> > interrupt pinning,
> I agree with this.
>
> > because unlike other PMDs it is perfectly valid to have the interrupts
> > pinned to dedicated cores separate from those used by DPDK.
> My point is the same, but the interrupts should not be pinned to DPDK
> datapath or service cores.
>
> > Or taking another example, suppose the app takes 8 cores in the
> > coremask, but only one of those cores is to be used for I/O, what cores
> > should the driver pin the interrupts to?
> They can be cores on the machine (guest or host) which are not used by
> DPDK.
> > It probably should be the same core used for I/O, but the driver can't
> > know which cores will be used for that; or alternatively the user might
> > want to use AF_XDP split across two cores, in which case any core on
> > the system might be the intended one for interrupts.
> I agree with the patch; my only question is about the dev->probe
> function: should there not be validation to ensure the IRQ core is not a
> DPDK core or a service core, since the interface is owned by the kernel
> and traffic not matched by the eBPF program is handled by the kernel via
> skb buffers?
>
No. Since the 5.4 kernel, it's a usable configuration to run both the
kernel and userspace portions of AF_XDP on the same core. In order to get
the best performance with a fixed number of cores, this setup - with
interrupts pinned to the polling RX core - is now recommended. [For
absolute best performance using any number of cores, a separate interrupt
core may still work best, though.]

/Bruce
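As an illustration of the probe-time check debated above (a sketch only,
not code from the patch): such a check could compare a user-supplied IRQ
core against the EAL coremask using rte_lcore_is_enabled() and, given the
point about 5.4+ kernels, log a note rather than fail the probe. The
check_irq_core() helper name and the warn-only policy are assumptions made
for this sketch.

/*
 * Illustrative sketch only (not code from the patch): a probe-time
 * check of a user-supplied IRQ core against the EAL lcores, as
 * debated above.  The helper name check_irq_core() and the warn-only
 * policy are assumptions for illustration.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_lcore.h>

/*
 * Warn (rather than fail) if the requested IRQ core is also an EAL
 * worker/main lcore.  Note: rte_lcore_is_enabled() reports ROLE_RTE
 * lcores only, so a separate check would be needed for service cores;
 * that is omitted from this sketch.
 */
static void
check_irq_core(unsigned int irq_core)
{
	if (irq_core >= RTE_MAX_LCORE) {
		printf("IRQ core %u is out of range\n", irq_core);
		return;
	}

	if (rte_lcore_is_enabled(irq_core))
		printf("note: IRQ core %u is also an EAL lcore; this is a "
		       "valid setup on 5.4+ kernels, but a dedicated core "
		       "may still give the best absolute performance\n",
		       irq_core);
	else
		printf("IRQ core %u is not an EAL worker lcore\n", irq_core);
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	check_irq_core(2);	/* example: query core 2 */

	rte_eal_cleanup();
	return 0;
}

Built against DPDK and run with an EAL coremask such as -l 0-3, this only
reports whether the chosen core overlaps the coremask; as noted above,
overlap with the polling core is a valid configuration, so the message is
informational rather than an error.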