From: Moon-Sang Lee <sang0627@gmail.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org
Date: Sun, 18 Oct 2015 14:51:04 +0900
In-Reply-To: <20151016134320.GE9980@bricha3-MOBL3>
Subject: Re: [dpdk-dev] [Q] l2fwd in examples directory

Thanks, Bruce. I didn't know that PCI slots have a direct socket affinity.
Is it static, or is it configurable through the PCI configuration space?

That said, on my NUT (node under test), a two-node NUMA system,
rte_eth_dev_socket_id(portid) always seems to return -1, whether portid is
0, 1, or any other value. I would appreciate it if you could explain a bit
more about how the affinity is obtained.

p.s. I'm using an Intel Xeon processor and a 1G NIC (82576).

On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson
<bruce.richardson@intel.com> wrote:

> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
> > There is code as below in examples/l2fwd/main.c, and I think
> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY),
> > since there is no code in the example that associates ports with
> > lcores.
>
> Can you perhaps clarify what you mean here? On modern NUMA systems, such
> as those from Intel :-), the PCI slots are directly connected to the CPU
> sockets, so the ethernet ports do indeed have a direct NUMA affinity.
> It's not something that the app needs to specify.
>
> /Bruce
>
> > (i.e. I need to find the matching lcore for portid from
> > lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id).)
> >
> >         /* init one RX queue */
> >         fflush(stdout);
> >         ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
> >                                      rte_eth_dev_socket_id(portid),
> >                                      NULL,
> >                                      l2fwd_pktmbuf_pool);
> >         if (ret < 0)
> >                 rte_exit(EXIT_FAILURE,
> >                          "rte_eth_rx_queue_setup:err=%d, port=%u\n",
> >                          ret, (unsigned) portid);
> >
> > It works fine even though the memory is allocated on a different NUMA
> > node, but I wonder whether there is a DPDK API that associates an lcore
> > with a port internally, so that
> > rte_eth_devices[portid].pci_dev->numa_node contains the proper node.
> >
> > --
> > Moon-Sang Lee, SW Engineer
> > Email: sang0627@gmail.com
> > Wisdom begins in wonder. *Socrates*
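
For what it's worth, here is a minimal sketch of the workaround I have in
mind. This is my own code, not from the l2fwd example; the local variable
`socket` is something I introduce. It falls back to the socket of the
lcore doing the setup whenever the port reports no affinity:

        /* Sketch: use the port's NUMA socket when it is known,
         * otherwise fall back to the socket of the current lcore. */
        int socket = rte_eth_dev_socket_id(portid);
        if (socket == SOCKET_ID_ANY)          /* -1: affinity unknown */
                socket = rte_socket_id();     /* socket of this lcore */

        ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd, socket,
                                     NULL, l2fwd_pktmbuf_pool);
        if (ret < 0)
                rte_exit(EXIT_FAILURE,
                         "rte_eth_rx_queue_setup:err=%d, port=%u\n",
                         ret, (unsigned) portid);

At least this way the RX ring is allocated on the same node as the core
doing the setup when the port's affinity cannot be determined, which is
what I expected the example to do.
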
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*