From: "John W. Linville" <linville@tuxdriver.com>
To: "Zhou, Danny"
Cc: dev@dpdk.org
Date: Tue, 15 Jul 2014 15:08:19 -0400
Message-ID: <20140715190818.GD26012@tuxdriver.com>
Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for
 AF_PACKET-based virtual devices

On Tue, Jul 15, 2014 at 03:40:56PM +0000, Zhou, Danny wrote:
> > -----Original Message-----
> > From: John W. Linville [mailto:linville@tuxdriver.com]
> > Sent: Tuesday, July 15, 2014 10:01 PM
> > To: Neil Horman
> > Cc: Zhou, Danny; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for
> > AF_PACKET-based virtual devices
> >
> > On Tue, Jul 15, 2014 at 08:17:44AM -0400, Neil Horman wrote:
> > > On Tue, Jul 15, 2014 at 12:15:49AM +0000, Zhou, Danny wrote:
> > > > According to my performance measurements for 64B small packets,
> > > > single-queue performance is better than 16 queues (1.35M pps vs.
> > > > 0.93M pps). That makes sense to me: in the 16-queue case, more
> > > > CPU cycles are spent in kernel land (87% for 16 queues vs. 80%
> > > > for 1 queue) by the NAPI-enabled ixgbe driver switching between
> > > > polling and interrupt modes to service per-queue rx interrupts,
> > > > so more context-switch overhead is involved. Also, since the
> > > > eth_packet_rx/eth_packet_tx routines involve two memory copies
> > > > between the DPDK mbuf and the pbuf for each packet, they can
> > > > hardly achieve high performance unless packets are DMAed
> > > > directly into mbufs, which would require ixgbe driver support.
> > >
> > > I thought the 16 queues would be spread out across as many CPUs
> > > as you had, though, obviating the need for context switches, no?
> >
> > I think Danny is testing the single-CPU case. Having more queues
> > than CPUs probably does not provide any benefit.
> >
> > It would be cool to hack the DPDK memory management to work
> > directly out of the mmap'ed AF_PACKET buffers. But at this point I
> > don't have enough knowledge of DPDK internals to know if that is at
> > all reasonable...
> >
> > John
> >
> > P.S. Danny, have you run any performance tests on the PCAP driver?
>
> No, I do not have PCAP driver performance results in hand. But I
> remember it is less than 1M pps for 64B packets.

Cool, good info...thanks!
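
For anyone following the thread: the two-copy receive path Danny
describes works roughly as below. This is a minimal sketch, not the
actual librte_pmd_packet code; the pkt_rx_queue structure, the
sketch_rx name, and the ring-walk details are illustrative assumptions.

/*
 * Sketch of the rx-side copy: each frame the kernel has filled in the
 * mmap'ed PACKET_RX_RING is copied into a freshly allocated rte_mbuf.
 */
#include <string.h>
#include <linux/if_packet.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct pkt_rx_queue {
	uint8_t *ring;             /* base of the mmap'ed PACKET_RX_RING */
	unsigned int frame_num;    /* next frame slot to inspect */
	unsigned int frame_count;  /* number of frame slots in the ring */
	unsigned int frame_size;   /* tp_frame_size used at ring setup */
	struct rte_mempool *mb_pool;
};

static uint16_t
sketch_rx(struct pkt_rx_queue *q, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	uint16_t num_rx = 0;

	while (num_rx < nb_pkts) {
		struct tpacket2_hdr *hdr = (struct tpacket2_hdr *)
		    (q->ring + q->frame_num * q->frame_size);

		/* Frame still owned by the kernel? Ring is empty. */
		if ((hdr->tp_status & TP_STATUS_USER) == 0)
			break;

		struct rte_mbuf *mbuf = rte_pktmbuf_alloc(q->mb_pool);
		if (mbuf == NULL)
			break;

		/* This memcpy is one of the two per-packet copies:
		 * kernel ring frame -> mbuf. The tx path does the
		 * reverse, mbuf -> tx ring frame. */
		rte_pktmbuf_data_len(mbuf) = hdr->tp_snaplen;
		rte_pktmbuf_pkt_len(mbuf) = hdr->tp_snaplen;
		memcpy(rte_pktmbuf_mtod(mbuf, void *),
		       (uint8_t *)hdr + hdr->tp_mac, hdr->tp_snaplen);

		/* Hand the frame slot back to the kernel and advance. */
		hdr->tp_status = TP_STATUS_KERNEL;
		q->frame_num = (q->frame_num + 1) % q->frame_count;

		bufs[num_rx++] = mbuf;
	}
	return num_rx;
}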
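
And for context on the mmap'ed AF_PACKET buffers John suggests working
out of directly, this is roughly how such a shared ring is created with
the standard PACKET_RX_RING API. Again a sketch only: error handling is
trimmed and the block/frame sizes are arbitrary example values.

#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

static void *
sketch_open_rx_ring(int *sock_out, struct tpacket_req *req)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int ver = TPACKET_V2;

	setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver));

	/* 64 blocks of 4 KiB, two 2 KiB frame slots per block. */
	req->tp_block_size = 4096;
	req->tp_frame_size = 2048;
	req->tp_block_nr = 64;
	req->tp_frame_nr = (req->tp_block_size / req->tp_frame_size) *
			   req->tp_block_nr;
	setsockopt(fd, SOL_PACKET, PACKET_RX_RING, req, sizeof(*req));

	/* The kernel and user space now share this mapping. */
	void *ring = mmap(NULL, req->tp_block_size * req->tp_block_nr,
			  PROT_READ | PROT_WRITE, MAP_SHARED | MAP_LOCKED,
			  fd, 0);
	*sock_out = fd;
	return ring;
}

With the ring mapped like this, the zero-copy idea would amount to
pointing mbuf data at these frames and deferring the tp_status =
TP_STATUS_KERNEL handback until the mbuf is freed, which is where the
DPDK memory-management surgery John mentions would come in.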
-- 
John W. Linville		Someday the world will need a hero,
linville@tuxdriver.com		and you might be all we have.  Be ready.