From: "Etai Lev Ran"
To: "'Prashant Upadhyaya'"
Cc: dev@dpdk.org
Date: Wed, 12 Feb 2014 13:47:57 +0200
Message-ID: <026601cf27e8$49bf1830$dd3d4890$@gmail.com>
Subject: Re: [dpdk-dev] NUMA CPU Sockets and DPDK
List-Id: patches and discussions about DPDK

Hi Prashant,
Based on our experience, using DPDK across CPU sockets may indeed result in some performance degradation (~10% for our application vs. staying on one socket; YMMV based on HW, application structure, etc.).

Regarding CPU utilization on core 1, the one picking up traffic: perhaps I misunderstood your comment, but I would expect it to always be close to 100%, since it is polling the device via the PMD rather than being driven by interrupts.

Regards,
Etai

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Prashant Upadhyaya
Sent: Wednesday, February 12, 2014 1:28 PM
To: dev@dpdk.org
Subject: [dpdk-dev] NUMA CPU Sockets and DPDK

Hi guys,

What has been your experience of using DPDK-based apps in NUMA mode with multiple sockets, where some cores are present on one socket and the other cores on another socket?

I am migrating my application from an Intel machine with 8 cores, all in one socket, to a 32-core machine where 16 cores are in one socket and the other 16 cores are in the second socket.

My core 0 does all initialization for mbufs, NIC ports, queues etc., and uses SOCKET_ID_ANY for socket-related parameters.

The use case works, but I think I am running into performance issues on the 32-core machine. The lscpu output on my 32-core machine shows the following:

NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31

I am using core 1 to lift all the data from a single queue of an 82599EB port, and I see that the CPU utilization for this core 1 is way too high, even for traffic of 1 Gbps with a packet size of 650 bytes.

In general, does one need to be careful when working with multiple sockets? Any comments would be helpful.

Regards
-Prashant

===============================================================================
Please refer to http://www.aricent.com/legal/email_disclaimer.html for important disclosures regarding this electronic communication.
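[Editor's note: Etai's point that a PMD core always shows ~100% CPU can be made quantitative by counting empty vs. non-empty poll calls. The following is a minimal sketch, not code from this thread; it assumes a DPDK build environment, an initialized port, and uses queue 0 and illustrative burst/interval sizes. The API calls (rte_eth_rx_burst, rte_get_tsc_cycles, rte_get_tsc_hz) are standard DPDK.]

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_cycles.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Sketch: estimate "useful" load on a polling core. OS tools will
     * always report ~100% CPU for this loop; the fraction of bursts that
     * actually returned packets is a better measure of real work. */
    static void rx_loop(uint16_t port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint64_t polls = 0, nonempty = 0;
        uint64_t next = rte_get_tsc_cycles() + rte_get_tsc_hz(); /* ~1 s */

        for (;;) {
            uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            polls++;
            if (n > 0) {
                nonempty++;
                for (uint16_t i = 0; i < n; i++)
                    rte_pktmbuf_free(bufs[i]); /* a real app would process */
            }
            if (rte_get_tsc_cycles() > next) {
                printf("non-empty polls: %.1f%%\n",
                       100.0 * (double)nonempty / (double)polls);
                polls = nonempty = 0;
                next = rte_get_tsc_cycles() + rte_get_tsc_hz();
            }
        }
    }

If that percentage is low while system CPU reads 100%, the core is merely busy-polling, not overloaded.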
===============================================================================
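[Editor's note: on the SOCKET_ID_ANY point raised above, a common remedy is to allocate the mbuf pool and RX descriptors on the NIC's own NUMA node. This is a hedged sketch under the assumption of a recent DPDK release providing rte_pktmbuf_pool_create(); the pool name and sizes are illustrative, not from this thread.]

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    /* Sketch: place the RX mbuf pool and RX queue on the NUMA node the
     * NIC is attached to, instead of passing SOCKET_ID_ANY. */
    static struct rte_mempool *setup_rx(uint16_t port)
    {
        int sock = rte_eth_dev_socket_id(port); /* NUMA node of the NIC */
        if (sock == SOCKET_ID_ANY)
            sock = (int)rte_socket_id();        /* fall back to caller's node */

        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "rx_pool", 8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, sock);
        if (pool == NULL)
            return NULL;

        /* The socket_id argument also places the RX descriptor ring. */
        if (rte_eth_rx_queue_setup(port, 0, 512, sock, NULL, pool) != 0)
            return NULL;
        return pool;
    }

Pinning the polling lcore to a core on that same node (here, an even-numbered core for node0 per the lscpu output above) avoids the cross-socket traffic Etai describes.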