From: "Ananyev, Konstantin"
To: Simon Guo
Cc: "Lu, Wenzhuo", "dev@dpdk.org", Thomas Monjalon
Subject: Re: [dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
Date: Thu, 18 Jan 2018 12:14:05 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772588628010D@irsmsx105.ger.corp.intel.com>
In-Reply-To: <20180117091337.GA30690@simonLocalRHEL7.x64>
Hi Simon,

> Hi, Konstantin,
> On Tue, Jan 16, 2018 at 12:38:35PM +0000, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of wei.guo.simon@gmail.com
> > > Sent: Saturday, January 13, 2018 2:35 AM
> > > To: Lu, Wenzhuo
> > > Cc: dev@dpdk.org; Thomas Monjalon; Simon Guo
> > > Subject: [dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
> > >
> > > From: Simon Guo
> > >
> > > Currently the rx/tx queue is allocated from the buffer pool on the socket of:
> > > - the port's socket if --port-numa-config is specified
> > > - or the ring-numa-config setting per port
> > >
> > > Both of the above "bind" a queue to a single socket per port configuration.
> > > But better performance can actually be achieved if one port's queues can
> > > be spread across multiple NUMA nodes, with each rx/tx queue allocated
> > > on its lcore's socket.
> > >
> > > This patch adds a new option "--ring-bind-lcpu" (no parameter). With
> > > this, testpmd can utilize the PCI-e bus bandwidth on other NUMA
> > > nodes.
> > >
> > > When the --port-numa-config or --ring-numa-config option is specified,
> > > this --ring-bind-lcpu option will be suppressed.
> >
> > Instead of introducing one more option - wouldn't it be better to
> > allow the user to manually define flows and assign them to particular lcores?
> > Then the user would be able to create any FWD configuration he/she likes.
> > Something like:
> > lcore X add flow rxq N,Y txq M,Z
> >
> > Which would mean - on lcore X, receive packets from port=N, rx_queue=Y,
> > and send them through port=M, tx_queue=Z.
> Thanks for the comment.
> Wouldn't that be too complicated a solution for the user, since it would require
> a definition specifically for each lcore? We might have hundreds of lcores on
> current modern platforms.

Why for all lcores? Only for the ones that will do packet forwarding.
Also, if the configuration becomes too complex (or too big) to be written manually,
the user can write a script that generates the set of testpmd commands needed to
achieve the desired layout.

Konstantin
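[Editor's note] The generator script suggested above could be sketched as follows. Note that the `lcore X add flow rxq N,Y txq M,Z` syntax is only a proposal made in this thread, not an existing testpmd command, and the lcore/port/queue numbers below are made-up example values:

```python
# Sketch of a script that emits one proposed "lcore X add flow ..." command
# per (port, rx queue) pair, spreading rx queues round-robin across the
# forwarding lcores. Tx goes out the peer port on the same queue index,
# mimicking testpmd's usual pairwise forwarding between two ports.
# Assumption: the command syntax is the one proposed in this thread.

def gen_flow_commands(lcores, ports, nb_queues):
    """Return a list of testpmd-style flow commands (proposed syntax)."""
    cmds = []
    i = 0
    for pi, port in enumerate(ports):
        peer = ports[(pi + 1) % len(ports)]  # pair each port with the next one
        for q in range(nb_queues):
            lcore = lcores[i % len(lcores)]  # round-robin over fwd lcores only
            cmds.append(f"lcore {lcore} add flow rxq {port},{q} txq {peer},{q}")
            i += 1
    return cmds

if __name__ == "__main__":
    # Example: 4 forwarding lcores, 2 ports, 4 rx/tx queues per port.
    for line in gen_flow_commands(lcores=[1, 2, 3, 4], ports=[0, 1], nb_queues=4):
        print(line)
```

Only the lcores that actually forward packets appear in the list, which is the point made above: the per-lcore configuration cost is bounded by the forwarding set, not by the machine's total core count.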