From: "Jayakumar, Muthurajan"
To: Hamid Ramazani, Thomas Monjalon, dev@dpdk.org
Date: Wed, 30 Apr 2014 04:56:15 +0000
Message-ID: <5D695A7F6F10504DBD9B9187395A21797D0BB105@ORSMSX112.amr.corp.intel.com>
Subject: Re: [dpdk-dev] packet loss: multi-queue (RSS enabled)

Hi,

Please find the attached paper:
http://kfall.net/ucbpage/papers/snc.pdf

Figures 4 and 5 show the degradation when the number of queues is increased; the paper identifies the sweet spot as 2 to 4 queues.

Could you please verify with a smaller number of queues?

Thanks,

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hamid Ramazani
Sent: Tuesday, April 29, 2014 9:42 PM
To: Thomas Monjalon; dev@dpdk.org
Subject: [dpdk-dev] packet loss: multi-queue (RSS enabled)

Hi,

I tried for more than a week to solve this problem myself rather than bother the list, but I didn't succeed. Maybe other people have the same problem.

I have a simple program attached, intended for simple packet capture: it reads packets from the interface, writes them to memory, and frees that memory in the next loop.

I capture with a 10G Intel 82599EB SFI/SFP+ network interface. As you may know, this NIC supports up to 128 RSS queues.

This is just a test, so packets are being sent at 820 Kpps (kilopackets per second). Each packet is 1500 B (fixed size); it is 9.16 Gbit per second. Of course, when the packet rate goes up and the packet size goes down (e.g. 400 B per packet), the loss gets much worse.

When using one queue to receive, I get all the packets with no loss. When I use more than one queue (e.g. 8 queues), with each thread running on a dedicated core, I see a considerable amount of loss.

Please note that:
1. The computer has 12 cores at 2.67 GHz (Intel Xeon X5650), and it does nothing else but capture packets.
2. The operating system is Ubuntu 12.04.3 LTS.

The attached file includes:
main.h
main.c
Makefile
./run.sh

It is configured to run with 8 queues; a sketch of the kind of RSS port setup involved follows below.
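For reference, this is a minimal sketch of a multi-queue RSS port setup, written against the API names of DPDK releases from this era (ETH_MQ_RX_RSS, ETH_RSS_IPV4, uint8_t port ids); the queue count, descriptor count, setup_rss_port helper, and NULL queue configs are illustrative assumptions, not taken from the attached main.c:

/*
 * Minimal sketch of a multi-queue RSS port setup (not the attached
 * main.c).  API names follow DPDK releases of this era; the queue and
 * descriptor counts are illustrative.
 */
#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_RX_QUEUES 8     /* matches the 8-queue test above */
#define NB_RX_DESC   512   /* illustrative ring size */

static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,  /* hash flows across RX queues */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,                        /* NIC default key */
			.rss_hf  = ETH_RSS_IPV4 | ETH_RSS_IPV6, /* hash on IP */
		},
	},
};

static int
setup_rss_port(uint8_t port_id, struct rte_mempool *pool)
{
	uint16_t q;
	int ret;

	ret = rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, &port_conf);
	if (ret < 0)
		return ret;

	for (q = 0; q < NB_RX_QUEUES; q++) {
		/* NULL rx_conf means driver defaults in newer releases;
		 * older ones need an explicit struct rte_eth_rxconf here. */
		ret = rte_eth_rx_queue_setup(port_id, q, NB_RX_DESC,
				rte_eth_dev_socket_id(port_id), NULL, pool);
		if (ret < 0)
			return ret;
	}
	/* one TX queue so the port starts cleanly, even if unused */
	ret = rte_eth_tx_queue_setup(port_id, 0, NB_RX_DESC,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}

One thing worth checking against main.c: with 8 queues, the mempool must be sized to cover 8 RX rings' worth of descriptors plus the per-lcore caches and in-flight bursts; a pool sized for a single queue will starve the RX rings and show up as loss.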
If you want to change the number of receive queues:
1. In main.c, change the value assigned to nb_rx_q_of_dev to the desired value.
2. Change the core mask in run.sh (since SKIP_MASTER is used, the mask should contain one more CPU than the number of queues).

I think the problem might be one of the following:
1. The port configuration is not right.
2. Freeing the memory has a considerable amount of overhead, and maybe I shouldn't do it. But if I don't free, the pool will fill up, won't it? Is there any other way? (A sketch of such a receive loop follows below.)

Please help. Thanks a lot in advance for your help and comments.

All the Best,
Hamid
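On the second point above, freeing is not optional; this is a hedged sketch of a per-queue receive loop, assuming one worker lcore per RX queue (the port number 0 and the queue_id wiring are assumptions, not taken from the attached code):

/*
 * Sketch of a per-lcore receive loop, one worker per RX queue
 * (port number and queue wiring are assumptions, not the attached
 * code).  The signature matches lcore_function_t, so a worker can
 * be started with rte_eal_remote_launch().
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32  /* illustrative burst size */

static int
rx_loop(void *arg)
{
	const uint16_t queue_id = *(const uint16_t *)arg;
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(0 /* port */, queue_id,
					 bufs, BURST_SIZE);
		for (i = 0; i < nb_rx; i++) {
			/* ... inspect or copy the packet data here ... */
			/* Returning the mbuf to its pool is mandatory:
			 * without it the pool drains, the RX ring cannot
			 * be refilled, and the NIC starts dropping. */
			rte_pktmbuf_free(bufs[i]);
		}
	}
	return 0;  /* not reached */
}

As for the overhead concern: rte_pktmbuf_free() normally just pushes the buffer back into the mempool's per-lcore cache, so it is cheap. But if the pool was created with cache_size 0, every free (and every refill by the driver) goes through the shared ring, and with 8 cores that contention alone can contribute to loss; it is worth checking how the pool is created in main.c.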