From: "Zhou, Danny"
To: Neil Horman, "John W. Linville"
Cc: "dev@dpdk.org"
Date: Tue, 15 Jul 2014 20:41:23 +0000
Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for AF_PACKET-based virtual devices
In-Reply-To: <20140715203108.GA20273@localhost.localdomain>

> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Wednesday, July 16, 2014 4:31 AM
> To: John W. Linville
> Cc: Zhou, Danny; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for
> AF_PACKET-based virtual devices
>
> On Tue, Jul 15, 2014 at 10:01:11AM -0400, John W. Linville wrote:
> > On Tue, Jul 15, 2014 at 08:17:44AM -0400, Neil Horman wrote:
> > > On Tue, Jul 15, 2014 at 12:15:49AM +0000, Zhou, Danny wrote:
> > > > According to my performance measurements for 64B small packets,
> > > > single-queue performance is better than 16 queues (1.35 Mpps vs.
> > > > 0.93 Mpps). That makes sense to me: in the 16-queue case, more CPU
> > > > cycles are spent in kernel land (87% for 16 queues vs. 80% for 1
> > > > queue) for the NAPI-enabled ixgbe driver to switch between polling
> > > > and interrupt modes in order to service the per-queue RX
> > > > interrupts, so more context-switch overhead is involved. Also,
> > > > since the eth_packet_rx/eth_packet_tx routines perform two memory
> > > > copies between the DPDK mbuf and the pbuf for each packet, the PMD
> > > > can hardly achieve high performance unless packets are DMA'd
> > > > directly into mbufs, which would need support from the ixgbe
> > > > driver.
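
As an aside, to make the two-copy overhead above concrete, here is a
minimal, hypothetical sketch (not the actual librte_pmd_packet code; the
function and parameter names are made up) of what the RX side of an
AF_PACKET-based PMD has to do per packet. The frame layout follows
TPACKET_V2 from <linux/if_packet.h>, and the mbuf calls are the standard
rte_mbuf API:

#include <string.h>
#include <linux/if_packet.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical RX burst: drain up to nb_pkts frames from a mmap'ed
 * TPACKET_V2 RX ring into freshly allocated mbufs. */
static uint16_t
sketch_rx_burst(struct tpacket2_hdr **ring, unsigned int *next,
                unsigned int nb_frames, struct rte_mempool *mp,
                struct rte_mbuf **bufs, uint16_t nb_pkts)
{
        uint16_t rx = 0;

        while (rx < nb_pkts) {
                struct tpacket2_hdr *hdr = ring[*next];
                struct rte_mbuf *m;

                /* Frame still owned by the kernel: nothing more to read. */
                if ((hdr->tp_status & TP_STATUS_USER) == 0)
                        break;

                m = rte_pktmbuf_alloc(mp);
                if (m == NULL)
                        break;

                /* The per-packet copy: kernel-mapped AF_PACKET frame ->
                 * DPDK mbuf. A native PMD would instead have the NIC DMA
                 * straight into the mbuf. */
                memcpy(rte_pktmbuf_mtod(m, void *),
                       (uint8_t *)hdr + hdr->tp_mac, hdr->tp_snaplen);
                rte_pktmbuf_data_len(m) = hdr->tp_snaplen;
                rte_pktmbuf_pkt_len(m) = hdr->tp_snaplen;

                /* Return the frame to the kernel and advance the ring. */
                hdr->tp_status = TP_STATUS_KERNEL;
                *next = (*next + 1) % nb_frames;
                bufs[rx++] = m;
        }
        return rx;
}

The TX side is the mirror image: copy from the mbuf into a TX ring frame,
mark it TP_STATUS_SEND_REQUEST, and kick the socket, hence two copies per
packet.
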
> > >
> > > I thought 16 queues would be spread out between as many cpus as you
> > > had though, obviating the need for context switches, no?
> >
> > I think Danny is testing the single CPU case.  Having more queues than
> > CPUs probably does not provide any benefit.
> >
> Ah, yes, generally speaking, you never want nr_cpus < nr_queues.
> Otherwise you'll just be fighting yourself.
>

That is true for an interrupt-based NIC driver, and for this AF_PACKET-based
PMD because it depends on the kernel NIC driver. But with a poll-mode DPDK
native NIC driver, you can pin a thread to a core and have it poll multiple
queues on one NIC, or queues on different NICs, at the cost of more power
consumption or CPU cycles wasted busy-waiting for packets (see the sketch at
the end of this mail).

> > It would be cool to hack the DPDK memory management to work directly
> > out of the mmap'ed AF_PACKET buffers.  But at this point I don't have
> > enough knowledge of DPDK internals to know if that is at all
> > reasonable...
> >
> > John
> >
> > P.S. Danny, have you run any performance tests on the PCAP driver?
> >
> > --
> > John W. Linville                Someday the world will need a hero, and you
> > linville@tuxdriver.com          might be all we have.  Be ready.
> >
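
As promised above, here is a minimal, hypothetical sketch of the poll-mode
case (port and queue numbers are made up for illustration): a single lcore,
pinned to one core, busy-polls several RX queues, possibly on different
ports:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

struct rxq_ref {
        uint8_t  port_id;
        uint16_t queue_id;
};

/* Hypothetical poll loop: one pinned lcore services three RX queues
 * spread over two ports. */
static int
poll_loop(void *arg)
{
        static const struct rxq_ref queues[] = {
                { 0, 0 }, { 0, 1 },    /* two queues on port 0 */
                { 1, 0 },              /* one queue on port 1  */
        };
        const unsigned int nb_q = sizeof(queues) / sizeof(queues[0]);
        struct rte_mbuf *bufs[BURST_SIZE];
        unsigned int i;
        uint16_t j, nb;

        (void)arg;
        for (;;) {
                for (i = 0; i < nb_q; i++) {
                        nb = rte_eth_rx_burst(queues[i].port_id,
                                              queues[i].queue_id,
                                              bufs, BURST_SIZE);
                        /* Process the burst; here we just drop it.  The
                         * loop spins even when nb == 0, which is the
                         * "wasted cycles busy-waiting" cost. */
                        for (j = 0; j < nb; j++)
                                rte_pktmbuf_free(bufs[j]);
                }
        }
        return 0;
}

Such a loop would typically be launched on a chosen core with
rte_eal_remote_launch(); the trade-off is that the core spins at 100% even
when no packets arrive.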