* [dpdk-dev] [Q] l2fwd in examples directory
@ 2015-10-15 2:08 Moon-Sang Lee
2015-10-15 2:08 ` Moon-Sang Lee
0 siblings, 1 reply; 7+ messages in thread
From: Moon-Sang Lee @ 2015-10-15 2:08 UTC (permalink / raw)
To: dev
There is code as below in examples/l2fwd/main.c, and I think
rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
there is no code in the example that associates a port with an lcore.
(i.e. I need to find the matching lcore for portid in lcore_queue_conf[]
and call rte_lcore_to_socket_id(lcore_id).)

/* init one RX queue */
fflush(stdout);
ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
                             rte_eth_dev_socket_id(portid),
                             NULL,
                             l2fwd_pktmbuf_pool);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
                 ret, (unsigned) portid);

It works fine even though the memory is allocated on a different NUMA
node. But I wonder whether there is a DPDK API that associates an lcore
with a port internally, so that rte_eth_devices[portid].pci_dev->numa_node
contains the proper node.
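
(The lookup I mean would be roughly the sketch below; the helper name is
made up, and the lcore_queue_conf[] field names are assumed from the
l2fwd example.)

static int
port_socket_via_lcore(unsigned portid)
{
    unsigned lcore_id, i;

    /* Find the lcore that polls this port and return its NUMA socket. */
    RTE_LCORE_FOREACH(lcore_id) {
        struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];

        for (i = 0; i < qconf->n_rx_port; i++)
            if (qconf->rx_port_list[i] == portid)
                return (int)rte_lcore_to_socket_id(lcore_id);
    }

    return SOCKET_ID_ANY; /* no lcore is assigned to this port */
}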
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
* [dpdk-dev] [Q] l2fwd in examples directory
2015-10-15 2:08 [dpdk-dev] [Q] l2fwd in examples directory Moon-Sang Lee
@ 2015-10-15 2:08 ` Moon-Sang Lee
2015-10-16 13:43 ` Bruce Richardson
0 siblings, 1 reply; 7+ messages in thread
From: Moon-Sang Lee @ 2015-10-15 2:08 UTC (permalink / raw)
To: dev
There is code as below in examples/l2fwd/main.c, and I think
rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
there is no code in the example that associates a port with an lcore.
(i.e. I need to find the matching lcore for portid in lcore_queue_conf[]
and call rte_lcore_to_socket_id(lcore_id).)

/* init one RX queue */
fflush(stdout);
ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
                             rte_eth_dev_socket_id(portid),
                             NULL,
                             l2fwd_pktmbuf_pool);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
                 ret, (unsigned) portid);

It works fine even though the memory is allocated on a different NUMA
node. But I wonder whether there is a DPDK API that associates an lcore
with a port internally, so that rte_eth_devices[portid].pci_dev->numa_node
contains the proper node.
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
* Re: [dpdk-dev] [Q] l2fwd in examples directory
2015-10-15 2:08 ` Moon-Sang Lee
@ 2015-10-16 13:43 ` Bruce Richardson
2015-10-18 5:51 ` Moon-Sang Lee
0 siblings, 1 reply; 7+ messages in thread
From: Bruce Richardson @ 2015-10-16 13:43 UTC (permalink / raw)
To: Moon-Sang Lee; +Cc: dev
On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
> There is code as below in examples/l2fwd/main.c, and I think
> rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
> there is no code in the example that associates a port with an lcore.
Can you perhaps clarify what you mean here? On modern NUMA systems, such
as those from Intel :-), the PCI slots are directly connected to the CPU
sockets, so the Ethernet ports do indeed have a direct NUMA affinity.
It's not something that the app needs to specify.
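
For illustration, the usual pattern is to feed that affinity into the
mempool allocation, with an explicit fallback when none is reported. A
sketch only; the pool name and sizes here are invented, not taken from
the l2fwd example:

struct rte_mempool *mp;
int sock = rte_eth_dev_socket_id(portid);

if (sock == SOCKET_ID_ANY)
    sock = rte_socket_id(); /* no affinity reported; use the caller's node */

/* Place the mbuf pool on the port's NUMA node. */
mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 32, 0,
                             RTE_MBUF_DEFAULT_BUF_SIZE, sock);
if (mp == NULL)
    rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");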
/Bruce
> (i.e. I need to find the matching lcore for portid in
> lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id).)
>
> /* init one RX queue */
> fflush(stdout);
> ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
>                              rte_eth_dev_socket_id(portid),
>                              NULL,
>                              l2fwd_pktmbuf_pool);
> if (ret < 0)
>         rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
>                  ret, (unsigned) portid);
>
> It works fine even though the memory is allocated on a different NUMA
> node. But I wonder whether there is a DPDK API that associates an lcore
> with a port internally, so that rte_eth_devices[portid].pci_dev->numa_node
> contains the proper node.
>
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627@gmail.com
> Wisdom begins in wonder. *Socrates*
* Re: [dpdk-dev] [Q] l2fwd in examples directory
2015-10-16 13:43 ` Bruce Richardson
@ 2015-10-18 5:51 ` Moon-Sang Lee
2015-10-19 7:39 ` Moon-Sang Lee
0 siblings, 1 reply; 7+ messages in thread
From: Moon-Sang Lee @ 2015-10-18 5:51 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
Thanks, Bruce.
I didn't know that PCI slots have direct socket affinity.
Is it static, or configurable through PCI configuration space?
Well, my NUT, a two-node NUMA system, seems to always return -1 from
rte_eth_dev_socket_id(portid), whether portid is 0, 1, or any other value.
I would appreciate it if you could explain more about how the affinity is
obtained.

P.S.
I'm using an Intel Xeon processor and a 1G NIC (82576).
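
(For reference, one way to inspect what the kernel reports, independent
of DPDK; as far as I know this sysfs attribute is also what DPDK's EAL
reads during its PCI scan. A sketch only; the PCI address below is made
up:)

#include <stdio.h>

int main(void)
{
    /* The kernel reports -1 here when it has no NUMA affinity to give. */
    FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/numa_node", "r");
    int node = -1;

    if (f != NULL) {
        if (fscanf(f, "%d", &node) != 1)
            node = -1;
        fclose(f);
    }
    printf("numa_node = %d\n", node);
    return 0;
}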
On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson <
bruce.richardson@intel.com> wrote:
> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
> > There is code as below in examples/l2fwd/main.c, and I think
> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
> > there is no code in the example that associates a port with an lcore.
>
> Can you perhaps clarify what you mean here? On modern NUMA systems, such
> as those from Intel :-), the PCI slots are directly connected to the CPU
> sockets, so the Ethernet ports do indeed have a direct NUMA affinity.
> It's not something that the app needs to specify.
>
> /Bruce
>
> > (i.e. I need to find the matching lcore for portid in
> > lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id).)
> >
> > /* init one RX queue */
> > fflush(stdout);
> > ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
> >                              rte_eth_dev_socket_id(portid),
> >                              NULL,
> >                              l2fwd_pktmbuf_pool);
> > if (ret < 0)
> >         rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
> >                  ret, (unsigned) portid);
> >
> > It works fine even though the memory is allocated on a different NUMA
> > node. But I wonder whether there is a DPDK API that associates an lcore
> > with a port internally, so that
> > rte_eth_devices[portid].pci_dev->numa_node contains the proper node.
> >
> >
> > --
> > Moon-Sang Lee, SW Engineer
> > Email: sang0627@gmail.com
> > Wisdom begins in wonder. *Socrates*
>
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
* Re: [dpdk-dev] [Q] l2fwd in examples directory
2015-10-18 5:51 ` Moon-Sang Lee
@ 2015-10-19 7:39 ` Moon-Sang Lee
2015-10-19 7:51 ` Moon-Sang Lee
2015-10-19 9:34 ` Bruce Richardson
0 siblings, 2 replies; 7+ messages in thread
From: Moon-Sang Lee @ 2015-10-19 7:39 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
My NUT has a Xeon L5520, which is based on the Nehalem microarchitecture.
Does Nehalem have its PCIe interface on the chipset?

Anyhow, 'lstopo' shows the topology below, and it seems that my PCI
devices are connected to socket #0.
I'm still wondering why rte_eth_dev_socket_id(portid) always returns -1.
mslee@myhost:~$ lstopo
Machine (31GB)
  NUMANode L#0 (P#0 16GB) + Socket L#0 + L3 L#0 (8192KB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#8)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#2)
      PU L#3 (P#10)
    L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
      PU L#4 (P#4)
      PU L#5 (P#12)
    L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
      PU L#6 (P#6)
      PU L#7 (P#14)
  NUMANode L#1 (P#1 16GB) + Socket L#1 + L3 L#1 (8192KB)
    L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
      PU L#8 (P#1)
      PU L#9 (P#9)
    L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
      PU L#10 (P#3)
      PU L#11 (P#11)
    L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
      PU L#12 (P#5)
      PU L#13 (P#13)
    L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
      PU L#14 (P#7)
      PU L#15 (P#15)
  HostBridge L#0
    PCIBridge
      PCI 14e4:163b
        Net L#0 "em1"
      PCI 14e4:163b
        Net L#1 "em2"
    PCIBridge
      PCI 1000:0058
        Block L#2 "sda"
        Block L#3 "sdb"
    PCIBridge
      PCIBridge
        PCIBridge
          PCI 8086:10e8
          PCI 8086:10e8
        PCIBridge
          PCI 8086:10e8
          PCI 8086:10e8
    PCIBridge
      PCI 102b:0532
    PCI 8086:3a20
    PCI 8086:3a26
      Block L#4 "sr0"
mslee@myhost:~$
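
(A direct check from DPDK would be something like the sketch below,
assuming an initialized EAL and the rte_eth_dev_count() API of the DPDK
releases of that time:)

uint8_t portid;

/* Print the NUMA socket reported for every detected port. */
for (portid = 0; portid < rte_eth_dev_count(); portid++)
    printf("port %u -> NUMA socket %d\n",
           (unsigned)portid, rte_eth_dev_socket_id(portid));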
On Sun, Oct 18, 2015 at 2:51 PM, Moon-Sang Lee <sang0627@gmail.com> wrote:
>
> Thanks, Bruce.
>
> I didn't know that PCI slots have direct socket affinity.
> Is it static, or configurable through PCI configuration space?
> Well, my NUT, a two-node NUMA system, seems to always return -1 from
> rte_eth_dev_socket_id(portid), whether portid is 0, 1, or any other value.
> I would appreciate it if you could explain more about how the affinity is
> obtained.
>
> P.S.
> I'm using an Intel Xeon processor and a 1G NIC (82576).
>
>
>
>
> On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson <
> bruce.richardson@intel.com> wrote:
>
>> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
>> > There is code as below in examples/l2fwd/main.c, and I think
>> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
>> > there is no code in the example that associates a port with an lcore.
>>
>> Can you perhaps clarify what you mean here? On modern NUMA systems, such
>> as those from Intel :-), the PCI slots are directly connected to the CPU
>> sockets, so the Ethernet ports do indeed have a direct NUMA affinity.
>> It's not something that the app needs to specify.
>>
>> /Bruce
>>
>> > (i.e. I need to find the matching lcore for portid in
>> > lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id).)
>> >
>> > /* init one RX queue */
>> > fflush(stdout);
>> > ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
>> >                              rte_eth_dev_socket_id(portid),
>> >                              NULL,
>> >                              l2fwd_pktmbuf_pool);
>> > if (ret < 0)
>> >         rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
>> >                  ret, (unsigned) portid);
>> >
>> > It works fine even though the memory is allocated on a different NUMA
>> > node. But I wonder whether there is a DPDK API that associates an lcore
>> > with a port internally, so that
>> > rte_eth_devices[portid].pci_dev->numa_node contains the proper node.
>> >
>> >
>> > --
>> > Moon-Sang Lee, SW Engineer
>> > Email: sang0627@gmail.com
>> > Wisdom begins in wonder. *Socrates*
>>
>
>
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627@gmail.com
> Wisdom begins in wonder. *Socrates*
>
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
* Re: [dpdk-dev] [Q] l2fwd in examples directory
2015-10-19 7:39 ` Moon-Sang Lee
@ 2015-10-19 7:51 ` Moon-Sang Lee
2015-10-19 9:34 ` Bruce Richardson
1 sibling, 0 replies; 7+ messages in thread
From: Moon-Sang Lee @ 2015-10-19 7:51 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
Let me clarify my mixed-up points.
My processor is an L5520 (family 6, model 26), which is based on the
Nehalem microarchitecture according to Wikipedia
(https://en.wikipedia.org/wiki/Nehalem_(microarchitecture));
it does not have a PCIe interface integrated on the CPU.
Therefore, "rte_eth_dev_socket_id(portid) always returns -1" seems to be
no problem.
My understanding of the lstopo result might be wrong.
Thanks anyway.
On Mon, Oct 19, 2015 at 4:39 PM, Moon-Sang Lee <sang0627@gmail.com> wrote:
>
> My NUT has a Xeon L5520, which is based on the Nehalem microarchitecture.
> Does Nehalem have its PCIe interface on the chipset?
>
> Anyhow, 'lstopo' shows the topology below, and it seems that my PCI
> devices are connected to socket #0.
> I'm still wondering why rte_eth_dev_socket_id(portid) always returns -1.
>
> mslee@myhost:~$ lstopo
> Machine (31GB)
>   NUMANode L#0 (P#0 16GB) + Socket L#0 + L3 L#0 (8192KB)
>     L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
>       PU L#0 (P#0)
>       PU L#1 (P#8)
>     L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
>       PU L#2 (P#2)
>       PU L#3 (P#10)
>     L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
>       PU L#4 (P#4)
>       PU L#5 (P#12)
>     L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
>       PU L#6 (P#6)
>       PU L#7 (P#14)
>   NUMANode L#1 (P#1 16GB) + Socket L#1 + L3 L#1 (8192KB)
>     L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
>       PU L#8 (P#1)
>       PU L#9 (P#9)
>     L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
>       PU L#10 (P#3)
>       PU L#11 (P#11)
>     L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
>       PU L#12 (P#5)
>       PU L#13 (P#13)
>     L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
>       PU L#14 (P#7)
>       PU L#15 (P#15)
>   HostBridge L#0
>     PCIBridge
>       PCI 14e4:163b
>         Net L#0 "em1"
>       PCI 14e4:163b
>         Net L#1 "em2"
>     PCIBridge
>       PCI 1000:0058
>         Block L#2 "sda"
>         Block L#3 "sdb"
>     PCIBridge
>       PCIBridge
>         PCIBridge
>           PCI 8086:10e8
>           PCI 8086:10e8
>         PCIBridge
>           PCI 8086:10e8
>           PCI 8086:10e8
>     PCIBridge
>       PCI 102b:0532
>     PCI 8086:3a20
>     PCI 8086:3a26
>       Block L#4 "sr0"
> mslee@myhost:~$
>
>
>
> On Sun, Oct 18, 2015 at 2:51 PM, Moon-Sang Lee <sang0627@gmail.com> wrote:
>
>>
>> Thanks, Bruce.
>>
>> I didn't know that PCI slots have direct socket affinity.
>> Is it static, or configurable through PCI configuration space?
>> Well, my NUT, a two-node NUMA system, seems to always return -1 from
>> rte_eth_dev_socket_id(portid), whether portid is 0, 1, or any other value.
>> I would appreciate it if you could explain more about how the affinity is
>> obtained.
>>
>> P.S.
>> I'm using an Intel Xeon processor and a 1G NIC (82576).
>>
>>
>>
>>
>> On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson <
>> bruce.richardson@intel.com> wrote:
>>
>>> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
>>> > There is code as below in examples/l2fwd/main.c, and I think
>>> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY), since
>>> > there is no code in the example that associates a port with an lcore.
>>>
>>> Can you perhaps clarify what you mean here? On modern NUMA systems, such
>>> as those from Intel :-), the PCI slots are directly connected to the CPU
>>> sockets, so the Ethernet ports do indeed have a direct NUMA affinity.
>>> It's not something that the app needs to specify.
>>>
>>> /Bruce
>>>
>>> > (i.e. I need to find the matching lcore for portid in
>>> > lcore_queue_conf[] and call rte_lcore_to_socket_id(lcore_id).)
>>> >
>>> > /* init one RX queue */
>>> > fflush(stdout);
>>> > ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
>>> >                              rte_eth_dev_socket_id(portid),
>>> >                              NULL,
>>> >                              l2fwd_pktmbuf_pool);
>>> > if (ret < 0)
>>> >         rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
>>> >                  ret, (unsigned) portid);
>>> >
>>> > It works fine even though the memory is allocated on a different NUMA
>>> > node. But I wonder whether there is a DPDK API that associates an lcore
>>> > with a port internally, so that
>>> > rte_eth_devices[portid].pci_dev->numa_node contains the proper node.
>>> >
>>> >
>>> > --
>>> > Moon-Sang Lee, SW Engineer
>>> > Email: sang0627@gmail.com
>>> > Wisdom begins in wonder. *Socrates*
>>>
>>
>>
>>
>> --
>> Moon-Sang Lee, SW Engineer
>> Email: sang0627@gmail.com
>> Wisdom begins in wonder. *Socrates*
>>
>
>
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627@gmail.com
> Wisdom begins in wonder. *Socrates*
>
--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*
* Re: [dpdk-dev] [Q] l2fwd in examples directory
2015-10-19 7:39 ` Moon-Sang Lee
2015-10-19 7:51 ` Moon-Sang Lee
@ 2015-10-19 9:34 ` Bruce Richardson
1 sibling, 0 replies; 7+ messages in thread
From: Bruce Richardson @ 2015-10-19 9:34 UTC (permalink / raw)
To: Moon-Sang Lee; +Cc: dev
On Mon, Oct 19, 2015 at 04:39:41PM +0900, Moon-Sang Lee wrote:
> My NUT has a Xeon L5520, which is based on the Nehalem microarchitecture.
> Does Nehalem have its PCIe interface on the chipset?
For Nehalem, I think having the PCI NUMA node reported as -1 is normal, as
the PCI bus is not directly connected to the physical sockets but to the
chipset instead. On later-generation Xeon platforms, the PCI slots are
physically connected to one CPU socket or the other.
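
A safe pattern, then, is to treat -1 as "any socket" and pick the fallback
explicitly. A sketch against the l2fwd setup code, not the example's
actual code:

int sock = rte_eth_dev_socket_id(portid);
unsigned socket_id = (sock == SOCKET_ID_ANY) ?
                     rte_socket_id() : (unsigned)sock;

ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd, socket_id,
                             NULL, l2fwd_pktmbuf_pool);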
/Bruce
Thread overview: 7+ messages
2015-10-15 2:08 [dpdk-dev] [Q] l2fwd in examples directory Moon-Sang Lee
2015-10-15 2:08 ` Moon-Sang Lee
2015-10-16 13:43 ` Bruce Richardson
2015-10-18 5:51 ` Moon-Sang Lee
2015-10-19 7:39 ` Moon-Sang Lee
2015-10-19 7:51 ` Moon-Sang Lee
2015-10-19 9:34 ` Bruce Richardson