From: Moon-Sang Lee
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Mon, 19 Oct 2015 16:39:41 +0900
Subject: Re: [dpdk-dev] [Q] l2fwd in examples directory
References: <20151016134320.GE9980@bricha3-MOBL3>

My NUT has a Xeon L5520, which is based on the Nehalem microarchitecture.
Does Nehalem provide its PCIe interface on the chipset? Anyhow, 'lstopo'
shows the topology below, and it seems that my PCI devices are connected to
socket #0. I'm still wondering why rte_eth_dev_socket_id(portid) always
returns -1.

mslee@myhost:~$ lstopo
Machine (31GB)
  NUMANode L#0 (P#0 16GB) + Socket L#0 + L3 L#0 (8192KB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#8)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#2)
      PU L#3 (P#10)
    L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
      PU L#4 (P#4)
      PU L#5 (P#12)
    L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
      PU L#6 (P#6)
      PU L#7 (P#14)
  NUMANode L#1 (P#1 16GB) + Socket L#1 + L3 L#1 (8192KB)
    L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
      PU L#8 (P#1)
      PU L#9 (P#9)
    L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
      PU L#10 (P#3)
      PU L#11 (P#11)
    L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
      PU L#12 (P#5)
      PU L#13 (P#13)
    L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
      PU L#14 (P#7)
      PU L#15 (P#15)
  HostBridge L#0
    PCIBridge
      PCI 14e4:163b
        Net L#0 "em1"
      PCI 14e4:163b
        Net L#1 "em2"
    PCIBridge
      PCI 1000:0058
        Block L#2 "sda"
        Block L#3 "sdb"
    PCIBridge
      PCIBridge
        PCIBridge
          PCI 8086:10e8
          PCI 8086:10e8
        PCIBridge
          PCI 8086:10e8
          PCI 8086:10e8
        PCIBridge
          PCI 102b:0532
    PCI 8086:3a20
    PCI 8086:3a26
      Block L#4 "sr0"
mslee@myhost:~$
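
For what it's worth, below is a minimal sketch of the kind of check I have
in mind (assuming a stock DPDK 2.x environment, built against the usual DPDK
headers and run with the normal EAL arguments; untested here). It prints what
rte_eth_dev_socket_id() reports for each port and falls back to the calling
lcore's socket when the PMD returns SOCKET_ID_ANY (-1). As far as I
understand, the reported value ultimately comes from the kernel's
/sys/bus/pci/devices/<BDF>/numa_node entry, so a -1 there shows up unchanged
here.

    /* numa_check.c: print the NUMA node DPDK reports for each port. */
    #include <stdio.h>
    #include <stdlib.h>

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_memory.h>
    #include <rte_debug.h>

    int
    main(int argc, char **argv)
    {
            uint8_t port, nb_ports;

            if (rte_eal_init(argc, argv) < 0)
                    rte_exit(EXIT_FAILURE, "rte_eal_init failed\n");

            nb_ports = rte_eth_dev_count();
            for (port = 0; port < nb_ports; port++) {
                    int socket = rte_eth_dev_socket_id(port);

                    /* SOCKET_ID_ANY (-1): no node was reported for the
                     * device; fall back to the calling lcore's socket. */
                    if (socket == SOCKET_ID_ANY)
                            socket = (int)rte_socket_id();

                    printf("port %u -> socket %d\n", (unsigned)port, socket);
            }
            return 0;
    }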

On Sun, Oct 18, 2015 at 2:51 PM, Moon-Sang Lee wrote:

> thanks bruce.
>
> I didn't know that PCI slots have direct socket affinity.
> Is it static, or is it configurable through PCI configuration space?
> Well, my NUT, a two-node NUMA system, always seems to return -1 on calling
> rte_eth_dev_socket_id(portid), whether portid is 0, 1, or any other value.
> I would appreciate it if you could explain more about getting the affinity.
>
> p.s. I'm using an Intel Xeon processor and a 1G NIC (82576).
>
> On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
>
>> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
>> > There is code as below in examples/l2fwd/main.c, and I think
>> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY),
>> > since there is no code associating port and lcore in the example.
>>
>> Can you perhaps clarify what you mean here? On modern NUMA systems, such
>> as those from Intel :-), the PCI slots are directly connected to the CPU
>> sockets, so the ethernet ports do indeed have a direct NUMA affinity.
>> It's not something that the app needs to specify.
>>
>> /Bruce
>>
>> > (i.e. I need to find a matching lcore from lcore_queue_conf[] with
>> > portid and call rte_lcore_to_socket_id(lcore_id).)
>> >
>> >         /* init one RX queue */
>> >         fflush(stdout);
>> >         ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
>> >                                      rte_eth_dev_socket_id(portid),
>> >                                      NULL,
>> >                                      l2fwd_pktmbuf_pool);
>> >         if (ret < 0)
>> >                 rte_exit(EXIT_FAILURE,
>> >                          "rte_eth_rx_queue_setup:err=%d, port=%u\n",
>> >                          ret, (unsigned) portid);
>> >
>> > It works fine even though memory is allocated on a different NUMA node,
>> > but I wonder whether there is a DPDK API that associates an lcore to a
>> > port internally, so that rte_eth_devices[portid].pci_dev->numa_node
>> > contains the proper node.
>> >
>> > --
>> > Moon-Sang Lee, SW Engineer
>> > Email: sang0627@gmail.com
>> > Wisdom begins in wonder. *Socrates*
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627@gmail.com
> Wisdom begins in wonder. *Socrates*

--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
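
For the l2fwd case itself, a minimal sketch of the fallback discussed above
(assuming DPDK 2.x headers; port_socket_id() and polling_lcore are
hypothetical names standing in for the lookup into l2fwd's
lcore_queue_conf[], and are not part of the example):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_memory.h>

    /* Hypothetical helper (not part of l2fwd): choose the NUMA socket for
     * a port's RX queue and mempool.  Prefer the node the PMD reports for
     * the port; if it reports SOCKET_ID_ANY (-1), fall back to the socket
     * of the lcore that will poll the port. */
    static int
    port_socket_id(uint8_t portid, unsigned polling_lcore)
    {
            int socket = rte_eth_dev_socket_id(portid);

            if (socket == SOCKET_ID_ANY)
                    socket = (int)rte_lcore_to_socket_id(polling_lcore);
            return socket;
    }

The returned value would then be passed as the socket_id argument of
rte_eth_rx_queue_setup() (and used when creating l2fwd_pktmbuf_pool) in place
of the bare rte_eth_dev_socket_id(portid) call quoted above, so the mbufs end
up on the same node as the polling lcore even when the PCI affinity is
unknown.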