DPDK patches and discussions
From: Rajesh R <rajesh.arr@gmail.com>
To: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] how to change binding of NIC ports to NUMA nodes
Date: Fri, 4 Sep 2015 22:59:17 +0530	[thread overview]
Message-ID: <CAAh6R8+j8-TxyE-+3jgS-QwVY4GkvyTBXf1nWF+r-Tvo6BhXSw@mail.gmail.com> (raw)
In-Reply-To: <E115CCD9D858EF4F90C690B0DCB4D8973C7F08E0@IRSMSX108.ger.corp.intel.com>

Hi Pablo,

Thank you for the reply. I don't think I conveyed my query properly in my
original question.

I agree that the physical placement of a NIC in a PCIe slot decides the NUMA
node it is associated with.
But on the server I am experimenting with (an IBM System x3850 X5 with 4 Xeon
7560 processors) there are two I/O hubs (IOHs) through which the PCIe slots
are connected to the CPU sockets. Four of the PCIe slots are connected to one
IOH and three slots are connected to the second. Each IOH serves two CPU
sockets: IOH1 serves sockets 0 and 1, and IOH2 serves sockets 2 and 3. When I
put 2 NICs in slots connected to IOH1, both get bound to socket 0. Similarly,
when I put 2 NICs in slots connected to IOH2, both get bound to socket 2.

My question is: why do none of the cards get bound to NUMA node (socket) 1
or 3?

Is there something I am missing about the physical architecture of the
server? Is it that each IOH is directly attached to only one socket?
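For reference, this is roughly how I am reading the bindings from sysfs (a
small sketch; the helper name and the overridable root path are just for
illustration, the real path on Linux is /sys/bus/pci/devices):

```shell
# Print "<PCI address> -> NUMA node N" for every device under a sysfs PCI
# root. A value of -1 means the kernel could not determine the node.
list_pci_numa() {
    root="${1:-/sys/bus/pci/devices}"
    for dev in "$root"/*; do
        # Skip entries that do not expose a numa_node attribute.
        [ -f "$dev/numa_node" ] || continue
        printf '%s -> NUMA node %s\n' "$(basename "$dev")" "$(cat "$dev/numa_node")"
    done
}

# Usage on a live system:
#   list_pci_numa
```

On my server this shows only nodes 0 and 2 for the NICs, never 1 or 3, which
is what prompted the question above.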

Regards
Rajesh






On Fri, Sep 4, 2015 at 12:50 PM, De Lara Guarch, Pablo <
pablo.de.lara.guarch@intel.com> wrote:

> Hi Rajesh,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Rajesh R
> > Sent: Friday, September 04, 2015 5:29 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] how to change binding of NIC ports to NUMA nodes
> >
> > Hi,
> >
> > I am running a DPDK-based application on a 4-processor server, i.e. 4
> > NUMA nodes.
> > The server has 4 NIC cards, of which 2 get bound to NUMA node 0 and the
> > other 2 get bound to NUMA node 2 (as per /sys/pci/.../numa_node for
> > each card).
> >
> >
> > How can I distribute the cards evenly so that one card is bound to each
> > NUMA node?
> >
> > Can we control the binding from DPDK, either in pmd_ixgbe or igb_uio?
>
> The drivers cannot change the NUMA node of your NICs,
> as those nodes correspond to the physical sockets (CPU and memory)
> on your platform, and your NICs are physically connected
> to these sockets via the PCI slots.
>
> So, if you want to change the NUMA node, you will have to move the NIC(s)
> to another PCI slot that is connected to a different socket.
> Check the user guide of your platform to find out which PCI slots are
> connected to which socket.
>
> Regards,
>
> Pablo
> >
> >
> > --
> > Regards
> >
> > Rajesh R
>



-- 
Regards

Rajesh R


Thread overview: 3+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2015-09-04  4:29 Rajesh R
2015-09-04  7:20 ` De Lara Guarch, Pablo
2015-09-04 17:29   ` Rajesh R [this message]
