Date: Tue, 15 Jul 2014 10:01:11 -0400
From: "John W. Linville" <linville@tuxdriver.com>
To: Neil Horman
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for AF_PACKET-based virtual devices
Message-ID: <20140715140111.GA26012@tuxdriver.com>
References: <1405024369-30058-1-git-send-email-linville@tuxdriver.com> <1405362290-6753-1-git-send-email-linville@tuxdriver.com> <20140715121743.GA14273@localhost.localdomain>
In-Reply-To: <20140715121743.GA14273@localhost.localdomain>

On Tue, Jul 15, 2014 at 08:17:44AM -0400, Neil Horman wrote:
> On Tue, Jul 15, 2014 at 12:15:49AM +0000, Zhou, Danny wrote:
> > According to my performance measurements for 64B small packets,
> > 1-queue performance is better than 16 queues (1.35M pps vs. 0.93M
> > pps), which makes sense to me: in the 16-queue case, more CPU cycles
> > are spent in kernel land (87% for 16 queues vs. 80% for 1 queue) for
> > the NAPI-enabled ixgbe driver to switch between polling and interrupt
> > modes in order to service per-queue rx interrupts, so more context
> > switch overhead is involved. Also, since the eth_packet_rx/eth_packet_tx
> > routines involve two memory copies between the DPDK mbuf and the pbuf
> > for each packet, they can hardly achieve high performance unless
> > packets are DMA'ed directly into the mbuf, which would need ixgbe
> > driver support.
>
> I thought the 16 queues would be spread out across as many CPUs as you
> have, though, obviating the need for context switches, no?

I think Danny is testing the single-CPU case. Having more queues than
CPUs probably does not provide any benefit.

It would be cool to hack the DPDK memory management to work directly out
of the mmap'ed AF_PACKET buffers. But at this point I don't have enough
knowledge of DPDK internals to know if that is at all reasonable...

John

P.S. Danny, have you run any performance tests on the PCAP driver?

-- 
John W. Linville		Someday the world will need a hero, and you
linville@tuxdriver.com			might be all we have.  Be ready.
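
For context, a minimal sketch of what the mmap'ed AF_PACKET ring referred to
above looks like at the raw socket level. The ring geometry and the small
standalone program below are illustrative assumptions, not code taken from
librte_pmd_packet.

    /*
     * Sketch: create an AF_PACKET socket, attach a TPACKET_V2 RX ring, and
     * mmap the kernel-shared buffer region.  Needs CAP_NET_RAW / root.
     * Block/frame sizes and counts are illustrative only.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        int ver = TPACKET_V2;
        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver)) < 0) {
            perror("PACKET_VERSION"); return 1;
        }

        struct tpacket_req req = {
            .tp_block_size = 4096,   /* illustrative geometry */
            .tp_block_nr   = 64,
            .tp_frame_size = 2048,
            .tp_frame_nr   = 128,    /* (block_size / frame_size) * block_nr */
        };
        if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
            perror("PACKET_RX_RING"); return 1;
        }

        /* This mapping is the buffer region shared with the kernel; the
         * zero-copy idea above amounts to carving DPDK mbuf data areas out
         * of it instead of copying each frame into a separate mbuf. */
        size_t maplen = (size_t)req.tp_block_size * req.tp_block_nr;
        void *ring = mmap(NULL, maplen, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) { perror("mmap"); return 1; }

        /* The kernel flips each frame's tp_status to TP_STATUS_USER once a
         * packet has been written into it; here we just peek at frame 0. */
        struct tpacket2_hdr *hdr = ring;
        printf("frame 0 status: %lu\n", (unsigned long)hdr->tp_status);

        munmap(ring, maplen);
        close(fd);
        return 0;
    }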