DPDK patches and discussions
* [dpdk-dev] one worker reading multiple ports
@ 2014-11-20  8:33 Newman Poborsky
  2014-11-20  8:56 ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 7+ messages in thread
From: Newman Poborsky @ 2014-11-20  8:33 UTC (permalink / raw)
  To: dev

Hi,

Is it possible to use one worker thread (one lcore) to read packets from
multiple ports?

When I start 2 workers and assign each one to read from a different port
(with rte_eth_rx_burst()) everything works fine, but if I assign one
worker to read packets from 2 ports, rte_eth_rx_burst() returns 0 as if no
packets were read.

Is there any reason for this kind of behaviour?

Thanks!

Br,
Newman P.

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-20  8:33 [dpdk-dev] one worker reading multiple ports Newman Poborsky
@ 2014-11-20  8:56 ` De Lara Guarch, Pablo
  2014-11-20 16:10   ` Newman Poborsky
  0 siblings, 1 reply; 7+ messages in thread
From: De Lara Guarch, Pablo @ 2014-11-20  8:56 UTC (permalink / raw)
  To: Newman Poborsky; +Cc: dev

Hi Newman,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Newman Poborsky
> Sent: Thursday, November 20, 2014 8:34 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] one worker reading multiple ports
> 
> Hi,
> 
> is it possible to use one worker thread (one lcore) to read packets from
> multiple ports?
> 
> When I start 2 workers and assign each one  to read from different ports
> (with  rte_eth_rx_burst()) everything works fine, but if I assign one
> worker to read packets from 2 ports, rte_eth_rx_burst() returns 0 as if no
> packets are read.

Yes, it is totally possible. The only problem would be if you tried to use multiple threads
to read/write on a single port, in which case you should use multiple queues.
Look at the l3fwd app, for instance: you can use just a single core to handle packets on multiple ports.
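For reference, a minimal sketch of that pattern (names, the port list and the
burst size below are placeholders, not taken from l3fwd itself): one lcore
simply polls queue 0 of each port in turn.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Poll several ports from a single lcore; each port is assumed to have a
 * single RX queue (queue 0) already configured and started. */
static void
rx_loop(const uint8_t *port_ids, unsigned nb_ports)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        unsigned p, i;

        for (;;) {
                for (p = 0; p < nb_ports; p++) {
                        uint16_t nb_rx = rte_eth_rx_burst(port_ids[p], 0,
                                                          bufs, BURST_SIZE);

                        /* Process the packets; here they are just freed. */
                        for (i = 0; i < nb_rx; i++)
                                rte_pktmbuf_free(bufs[i]);
                }
        }
}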

Pablo
> 
> Is there any reason for this kind of behaviour?
> 
> Thanks!
> 
> Br,
> Newman P.

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-20  8:56 ` De Lara Guarch, Pablo
@ 2014-11-20 16:10   ` Newman Poborsky
  2014-11-20 21:52     ` Matthew Hall
  0 siblings, 1 reply; 7+ messages in thread
From: Newman Poborsky @ 2014-11-20 16:10 UTC (permalink / raw)
  To: De Lara Guarch, Pablo; +Cc: dev

Thank you for your answer.

I just realized that the reason rte_eth_rx_burst() returns 0 is that
inside ixgbe_recv_pkts() this call fails:
nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL

Does this mean that every RX queue should have its own rte_mempool? If so,
are there any optimal values for the number of RX descriptors, the per-queue
rte_mempool size, and the number of hugepages (from what I understand, these
three are correlated)?

If I'm wrong, please explain why.

Thanks!

BR,
Newman

On Thu, Nov 20, 2014 at 9:56 AM, De Lara Guarch, Pablo <
pablo.de.lara.guarch@intel.com> wrote:

> Hi Newman,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Newman Poborsky
> > Sent: Thursday, November 20, 2014 8:34 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] one worker reading multiple ports
> >
> > Hi,
> >
> > is it possible to use one worker thread (one lcore) to read packets from
> > multiple ports?
> >
> > When I start 2 workers and assign each one  to read from different ports
> > (with  rte_eth_rx_burst()) everything works fine, but if I assign one
> > worker to read packets from 2 ports, rte_eth_rx_burst() returns 0 as if
> no
> > packets are read.
>
> Yes, it is totally possible. The only problem would be if you try to use
> multiple threads
> to read/write on one port, in which case you should use multiple queues.
> Look at l3fwd app for instance. You can use just a single core to handle
> packets on multiple ports.
>
> Pablo
> >
> > Is there any reason for this kind of behaviour?
> >
> > Thanks!
> >
> > Br,
> > Newman P.
>

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-20 16:10   ` Newman Poborsky
@ 2014-11-20 21:52     ` Matthew Hall
  2014-11-21 14:03       ` Newman Poborsky
  0 siblings, 1 reply; 7+ messages in thread
From: Matthew Hall @ 2014-11-20 21:52 UTC (permalink / raw)
  To: Newman Poborsky; +Cc: dev

On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> Thank you for your answer.
> 
> I just realized that the reason the rte_eth_rx_burst() returns 0 is because
> inside ixgbe_recv_pkts() this fails:
> nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> 
> Does this mean that every RX queue should have its own rte_mempool?  If so,
> are there any optimal values for: number of RX descriptors, per-queue
> rte_mempool size, number of hugepages (from what I understand, these 3 are
> correlated)?
> 
> If I'm wrong, please explain why.
> 
> Thanks!
> 
> BR,
> Newman

Newman,

Mempools are created per NUMA node (ordinarily this means per processor socket
if sockets > 1).

When doing Tx/Rx queue setup, one should determine the socket which owns the
given PCI NIC, and try to use memory on that same socket to handle traffic for
that NIC and its queues.

So, for N cards with Q * N Tx/Rx queues, you only need S mempools, where S is
the number of sockets.

Then each of the Q * N queues will use the mempool from the socket closest to
the card.
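A sketch of that lookup (not from the thread; pktmbuf_pool[] is a hypothetical
per-socket array filled in at init time, and the NULL rx_conf asks the PMD for
its default queue configuration in recent DPDK releases):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical per-socket mbuf pools, created during initialization. */
extern struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES];

static int
setup_rx_queue(uint8_t port_id, uint16_t queue_id, uint16_t nb_rxd)
{
        int socket = rte_eth_dev_socket_id(port_id);

        if (socket < 0)         /* e.g. virtual devices: socket unknown */
                socket = 0;

        /* Give the queue the pool that lives on the NIC's own socket. */
        return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
                                      (unsigned)socket, NULL,
                                      pktmbuf_pool[socket]);
}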

Matthew.

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-20 21:52     ` Matthew Hall
@ 2014-11-21 14:03       ` Newman Poborsky
  2014-11-21 14:44         ` Bruce Richardson
  0 siblings, 1 reply; 7+ messages in thread
From: Newman Poborsky @ 2014-11-21 14:03 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

So, since a mempool is multi-consumer (by default), if one is used to
configure queues on multiple NICs that have different socket owners, then
mbuf allocation will fail? But if 2 NICs have the same socket owner, everything
should work fine? Since I'm talking about 2 ports on the same NIC, they
must have the same owner, so RX should work with RX queues configured
with the same mempool, right? But in my case it doesn't, so I guess I'm
missing something.

Any idea how I can troubleshoot why allocation fails with one mempool and
works fine with each queue having its own mempool?

Thank you,

Newman

On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall@mhcomputing.net>
wrote:

> On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > Thank you for your answer.
> >
> > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> because
> > inside ixgbe_recv_pkts() this fails:
> > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> >
> > Does this mean that every RX queue should have its own rte_mempool?  If
> so,
> > are there any optimal values for: number of RX descriptors, per-queue
> > rte_mempool size, number of hugepages (from what I understand, these 3
> are
> > correlated)?
> >
> > If I'm wrong, please explain why.
> >
> > Thanks!
> >
> > BR,
> > Newman
>
> Newman,
>
> Mempools are created per NUMA node (ordinarily this means per processor
> socket
> if sockets > 1).
>
> When doing Tx / Rx Queue Setup, one should determine the socket which owns
> the
> given PCI NIC, and try to use memory on that same socket to handle traffic
> for
> that NIC and Queues.
>
> So, for N cards with Q * N Tx / Rx queues, you only need S mempools.
>
> Then each of the Q * N queues will use the mempool from the socket closest
> to
> the card.
>
> Matthew.
>

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-21 14:03       ` Newman Poborsky
@ 2014-11-21 14:44         ` Bruce Richardson
  2014-11-21 22:55           ` Newman Poborsky
  0 siblings, 1 reply; 7+ messages in thread
From: Bruce Richardson @ 2014-11-21 14:44 UTC (permalink / raw)
  To: Newman Poborsky; +Cc: dev

On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> So, since a mempool is multi-consumer (by default), if one is used to
> configure queues on multiple NICs that have different socket owners, then
> mbuf allocation will fail? But if 2 NICs have the same socket owner, everything
> should work fine? Since I'm talking about 2 ports on the same NIC, they
> must have the same owner, so RX should work with RX queues configured
> with the same mempool, right? But in my case it doesn't, so I guess I'm
> missing something.

Actually, the mempools will work with NICs on multiple sockets - it's just
that performance is likely to suffer due to QPI usage. The mempools being on
one socket or the other is not going to break your application.

> 
> Any idea how I can troubleshoot why allocation fails with one mempool and
> works fine with each queue having its own mempool?

At a guess, I'd say that your mempools just aren't big enough. Try doubling the
size of the mempool in the single-pool case and see if it helps things.
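To make the "big enough" part concrete, a rough sizing sketch (values and names
are placeholders; a pool shared by several queues has to cover all of their RX
descriptors plus TX rings, per-lcore caches and in-flight mbufs;
rte_pktmbuf_pool_create() and RTE_MBUF_DEFAULT_BUF_SIZE are the helpers from
later DPDK releases, older trees use rte_mempool_create() with
rte_pktmbuf_pool_init()/rte_pktmbuf_init()):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_PORTS   2        /* ports sharing the pool */
#define NB_RXD     512      /* RX descriptors per port */
#define NB_TXD     512      /* TX descriptors per port */
#define BURST_SIZE 32
#define CACHE_SIZE 256      /* per-lcore mempool cache */

static struct rte_mempool *
create_shared_pool(void)
{
        /* Every RX descriptor of every queue holds an mbuf, so a shared pool
         * must at least cover all of them, plus TX rings and lcore caches. */
        unsigned nb_mbufs = NB_PORTS * (NB_RXD + NB_TXD + BURST_SIZE) +
                            rte_lcore_count() * CACHE_SIZE;

        return rte_pktmbuf_pool_create("shared_pool", nb_mbufs, CACHE_SIZE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}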

/Bruce

> 
> Thank you,
> 
> Newman
> 
> On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall@mhcomputing.net>
> wrote:
> 
> > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > Thank you for your answer.
> > >
> > > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> > because
> > > inside ixgbe_recv_pkts() this fails:
> > > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> > >
> > > Does this mean that every RX queue should have its own rte_mempool?  If
> > so,
> > > are there any optimal values for: number of RX descriptors, per-queue
> > > rte_mempool size, number of hugepages (from what I understand, these 3
> > are
> > > correlated)?
> > >
> > > If I'm wrong, please explain why.
> > >
> > > Thanks!
> > >
> > > BR,
> > > Newman
> >
> > Newman,
> >
> > Mempools are created per NUMA node (ordinarily this means per processor
> > socket
> > if sockets > 1).
> >
> > When doing Tx / Rx Queue Setup, one should determine the socket which owns
> > the
> > given PCI NIC, and try to use memory on that same socket to handle traffic
> > for
> > that NIC and Queues.
> >
> > So, for N cards with Q * N Tx / Rx queues, you only need S mempools.
> >
> > Then each of the Q * N queues will use the mempool from the socket closest
> > to
> > the card.
> >
> > Matthew.
> >

* Re: [dpdk-dev] one worker reading multiple ports
  2014-11-21 14:44         ` Bruce Richardson
@ 2014-11-21 22:55           ` Newman Poborsky
  0 siblings, 0 replies; 7+ messages in thread
From: Newman Poborsky @ 2014-11-21 22:55 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

Nice guess :) After adding a check with rte_mempool_empty(), as soon as I
enable the second port for reading, it shows that the mempool is empty. Thank
you for the help!
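For anyone else hitting this, a sketch of the kind of check that made it
visible (variable names are placeholders; rte_mempool_count() was later
renamed rte_mempool_avail_count()):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Call after an empty burst to see whether the pool has simply run dry. */
static void
check_pool(uint8_t port_id, uint16_t nb_rx, struct rte_mempool *rx_pool)
{
        if (nb_rx == 0 && rte_mempool_empty(rx_pool))
                printf("port %u: RX empty, pool '%s' exhausted (%u mbufs free)\n",
                       (unsigned)port_id, rx_pool->name,
                       rte_mempool_count(rx_pool));
}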

On Fri, Nov 21, 2014 at 3:44 PM, Bruce Richardson <
bruce.richardson@intel.com> wrote:

> On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> > So, since a mempool is multi-consumer (by default), if one is used to
> > configure queues on multiple NICs that have different socket owners, then
> > mbuf allocation will fail? But if 2 NICs have the same socket owner,
> > everything should work fine? Since I'm talking about 2 ports on the same
> > NIC, they must have the same owner, so RX should work with RX queues
> > configured with the same mempool, right? But in my case it doesn't, so I
> > guess I'm missing something.
>
> Actually, the mempools will work with NICs on multiple sockets - it's just
> that performance is likely to suffer due to QPI usage. The mempools being
> on
> one socket or the other is not going to break your application.
>
> >
> > Any idea how I can troubleshoot why allocation fails with one mempool and
> > works fine with each queue having its own mempool?
>
> At a guess, I'd say that your mempools just aren't big enough. Try
> doubling the
> size of the mempool in the single-pool case and see if it helps things.
>
> /Bruce
>
> >
> > Thank you,
> >
> > Newman
> >
> > On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall@mhcomputing.net>
> > wrote:
> >
> > > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > > Thank you for your answer.
> > > >
> > > > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> > > because
> > > > inside ixgbe_recv_pkts() this fails:
> > > > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> > > >
> > > > Does this mean that every RX queue should have its own rte_mempool?
> If
> > > so,
> > > > are there any optimal values for: number of RX descriptors, per-queue
> > > > rte_mempool size, number of hugepages (from what I understand, these
> 3
> > > are
> > > > correlated)?
> > > >
> > > > If I'm wrong, please explain why.
> > > >
> > > > Thanks!
> > > >
> > > > BR,
> > > > Newman
> > >
> > > Newman,
> > >
> > > Mempools are created per NUMA node (ordinarily this means per processor
> > > socket
> > > if sockets > 1).
> > >
> > > When doing Tx / Rx Queue Setup, one should determine the socket which
> owns
> > > the
> > > given PCI NIC, and try to use memory on that same socket to handle
> traffic
> > > for
> > > that NIC and Queues.
> > >
> > > So, for N cards with Q * N Tx / Rx queues, you only need S mempools.
> > >
> > > Then each of the Q * N queues will use the mempool from the socket
> closest
> > > to
> > > the card.
> > >
> > > Matthew.
> > >
>

