Date: Tue, 15 Jul 2014 16:31:08 -0400
From: Neil Horman
To: "John W. Linville"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v2] librte_pmd_packet: add PMD for AF_PACKET-based virtual devices
Message-ID: <20140715203108.GA20273@localhost.localdomain>
In-Reply-To: <20140715140111.GA26012@tuxdriver.com>
References: <1405024369-30058-1-git-send-email-linville@tuxdriver.com>
 <1405362290-6753-1-git-send-email-linville@tuxdriver.com>
 <20140715121743.GA14273@localhost.localdomain>
 <20140715140111.GA26012@tuxdriver.com>

On Tue, Jul 15, 2014 at 10:01:11AM -0400, John W. Linville wrote:
> On Tue, Jul 15, 2014 at 08:17:44AM -0400, Neil Horman wrote:
> > On Tue, Jul 15, 2014 at 12:15:49AM +0000, Zhou, Danny wrote:
> > > According to my performance measurements for 64B small packets,
> > > 1-queue performance is better than 16 queues (1.35M pps vs. 0.93M pps),
> > > which makes sense to me: in the 16-queue case more CPU cycles are spent
> > > in kernel land (87% for 16 queues vs. 80% for 1 queue) by the
> > > NAPI-enabled ixgbe driver switching between polling and interrupt modes
> > > to service per-queue rx interrupts, so more context-switch overhead is
> > > involved. Also, since the eth_packet_rx/eth_packet_tx routines involve
> > > two memory copies between the DPDK mbuf and the AF_PACKET frame (pbuf)
> > > for each packet, they can hardly achieve high performance unless
> > > packets are DMA'd directly into the mbuf, which would need ixgbe driver
> > > support.
> >
> > I thought 16 queues would be spread out between as many cpus as you had
> > though, obviating the need for context switches, no?
>
> I think Danny is testing the single CPU case.  Having more queues
> than CPUs probably does not provide any benefit.
>
Ah, yes.  Generally speaking, you never want nr_cpus < nr_queues; otherwise
you'll just be fighting yourself.

> It would be cool to hack the DPDK memory management to work directly
> out of the mmap'ed AF_PACKET buffers.  But at this point I don't
> have enough knowledge of DPDK internals to know if that is at all
> reasonable...
>
> John
>
> P.S.  Danny, have you run any performance tests on the PCAP driver?
>
> --
> John W. Linville        Someday the world will need a hero, and you
> linville@tuxdriver.com      might be all we have.  Be ready.
>
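
For reference, the per-packet copy Danny is describing looks roughly like the
following on the rx side. This is only a minimal sketch assuming a TPACKET_V2
ring; the bookkeeping names (rd, framenum, framecount) are illustrative and
not necessarily what the patch under review uses. The tx path does the mirror
copy, mbuf data -> ring frame.

/* Sketch: per-packet copy from an AF_PACKET rx ring into DPDK mbufs. */
#include <string.h>
#include <sys/uio.h>
#include <linux/if_packet.h>
#include <rte_mbuf.h>

static uint16_t
sketch_af_packet_rx(struct rte_mempool *mb_pool, struct iovec *rd,
                    unsigned int *framenum, unsigned int framecount,
                    struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	uint16_t num_rx = 0;
	unsigned int i;

	for (i = 0; i < nb_pkts; i++) {
		struct tpacket2_hdr *ppd = rd[*framenum].iov_base;

		/* Frame still owned by the kernel: nothing to receive yet. */
		if ((ppd->tp_status & TP_STATUS_USER) == 0)
			break;

		struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
		if (mbuf == NULL)
			break;

		/* Copy #1 (rx): ring frame -> mbuf data area. */
		rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
		rte_pktmbuf_pkt_len(mbuf) = ppd->tp_snaplen;
		memcpy(rte_pktmbuf_mtod(mbuf, void *),
		       (uint8_t *)ppd + ppd->tp_mac, ppd->tp_snaplen);

		/* Hand the frame back to the kernel and advance. */
		ppd->tp_status = TP_STATUS_KERNEL;
		*framenum = (*framenum + 1) % framecount;

		bufs[num_rx++] = mbuf;
	}
	return num_rx;
}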
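
And the mmap'ed AF_PACKET buffers John refers to are set up along these lines
(standard PACKET_RX_RING usage; the block/frame geometry below is arbitrary,
and the socket would still need to be bound to an interface before traffic
flows). If DPDK's mempools could be made to hand out buffers from this
mapping, the rx-side memcpy in the sketch above would go away.

/* Sketch: mapping a TPACKET_V2 rx ring for an AF_PACKET socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <arpa/inet.h>          /* htons */
#include <linux/if_packet.h>
#include <linux/if_ether.h>     /* ETH_P_ALL */

int main(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	int ver = TPACKET_V2;
	if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver)) < 0) {
		perror("PACKET_VERSION");
		return 1;
	}

	struct tpacket_req req = {
		.tp_block_size = 4096,
		.tp_frame_size = 2048,
		.tp_block_nr   = 64,
		.tp_frame_nr   = (4096 / 2048) * 64,
	};
	if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
		perror("PACKET_RX_RING");
		return 1;
	}

	/* One contiguous mapping covering every frame; the kernel copies
	 * received packets into these frames and flips tp_status to
	 * TP_STATUS_USER when a frame is ready for userspace. */
	void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
			  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (ring == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped %u bytes of rx ring\n",
	       req.tp_block_size * req.tp_block_nr);
	return 0;
}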