DPDK usage discussions
* [dpdk-users] Ideal design to use ip-fragmentation using multiple lcores
@ 2018-10-31  1:55 Sungho Hong
  2018-10-31 15:18 ` Stephen Hemminger
  0 siblings, 1 reply; 3+ messages in thread
From: Sungho Hong @ 2018-10-31  1:55 UTC (permalink / raw)
  To: users

Hello DPDK experts,

I have a question about how to ideally use IP fragmentation and reassembly
with N logical cores, each associated with its own rx and tx queues.

Should there be a single IP fragmentation table, fed by a single rx-queue?

I ask because I am not sure whether I will receive all of the fragments
correctly if I receive and reassemble the packets across multiple rx-queues
with multiple logical cores.

I have currently built an example that uses only one rx-queue with multiple
tx-queues and reassembles the fragmented messages with a single frag-table,
but I am not sure how to scale this.
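
For reference, here is a rough sketch of the single-table setup I mean,
loosely modelled on the examples/ip_reassembly app (DPDK 18.x API names;
the sizing constants and helper names are placeholders, not my actual code):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_ip_frag.h>

#define MAX_FLOW_NUM 0x1000                     /* placeholder table sizing */
#define MAX_FRAG_NUM RTE_LIBRTE_IP_FRAG_MAX_FRAG

static struct rte_ip_frag_tbl *frag_tbl;        /* one table, one rx-queue */
static struct rte_ip_frag_death_row death_row;  /* expired/overflow frags  */

static int
setup_frag_table(int socket_id)
{
        /* ~2 second fragment timeout, expressed in TSC cycles */
        uint64_t frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) /
                               MS_PER_S * 2000;

        frag_tbl = rte_ip_frag_table_create(MAX_FLOW_NUM, MAX_FRAG_NUM,
                        MAX_FLOW_NUM * MAX_FRAG_NUM, frag_cycles, socket_id);
        return frag_tbl == NULL ? -1 : 0;
}

/* Called for every mbuf polled from the single rx-queue; returns the
 * (reassembled or non-fragmented) packet, or NULL while fragments of the
 * datagram are still missing. */
static struct rte_mbuf *
reassemble_one(struct rte_mbuf *m)
{
        struct ipv4_hdr *ip_hdr = rte_pktmbuf_mtod_offset(m,
                        struct ipv4_hdr *, sizeof(struct ether_hdr));

        if (rte_ipv4_frag_pkt_is_fragmented(ip_hdr)) {
                m->l2_len = sizeof(struct ether_hdr);
                m->l3_len = sizeof(struct ipv4_hdr);
                m = rte_ipv4_frag_reassemble_packet(frag_tbl, &death_row,
                                m, rte_rdtsc(), ip_hdr);
        }
        rte_ip_frag_free_death_row(&death_row, 3);
        return m;
}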


* Re: [dpdk-users] Ideal design to use ip-fragmentation using multiple lcores
  2018-10-31  1:55 [dpdk-users] Ideal design to use ip-fragmentation using multiple lcores Sungho Hong
@ 2018-10-31 15:18 ` Stephen Hemminger
  2018-10-31 17:17   ` Sungho Hong
  0 siblings, 1 reply; 3+ messages in thread
From: Stephen Hemminger @ 2018-10-31 15:18 UTC (permalink / raw)
  To: Sungho Hong; +Cc: users

On Tue, 30 Oct 2018 18:55:02 -0700
Sungho Hong <maverickjin88@gmail.com> wrote:

> Hello DPDK experts,
> 
> I have a question about how to ideally use IP fragmentation and reassembly
> with N logical cores, each associated with its own rx and tx queues.
> 
> Should there be a single IP fragmentation table, fed by a single rx-queue?
> 
> I ask because I am not sure whether I will receive all of the fragments
> correctly if I receive and reassemble the packets across multiple rx-queues
> with multiple logical cores.
> 
> I have currently built an example that uses only one rx-queue with multiple
> tx-queues and reassembles the fragmented messages with a single frag-table,
> but I am not sure how to scale this.

I am not sure what you are asking.

The usual model for DPDK programs is to use RSS to spread received packets
across multiple RX queues, with one polling thread per RX queue.

When an IP packet is fragmented, the IP header is on every fragment but the
UDP header is only on the first fragment. RSS can be configured to include
the UDP port (or not). If RSS hashes on the UDP port as well (L3+L4), the
fragments of a datagram may arrive on different queues. If you configure RSS
for L3-only hashing, all fragments of a packet will arrive on the same queue.
The downside of L3-only hashing is that if your workload or benchmark uses
only a single address pair, all packets will land on one queue.
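
As a sketch (not a drop-in config, and assuming the DPDK 18.x macro names),
an L3-only RSS configuration would look roughly like this:

#include <rte_ethdev.h>

/* Hash on the IP addresses only (ETH_RSS_IP); adding ETH_RSS_UDP would
 * include the L4 ports and could split fragments across queues. */
static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .mq_mode = ETH_MQ_RX_RSS,
        },
        .rx_adv_conf = {
                .rss_conf = {
                        .rss_key = NULL,        /* driver default key */
                        .rss_hf  = ETH_RSS_IP,  /* L3 only, no L4 ports */
                },
        },
};

static int
configure_port(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q)
{
        /* one rx-queue (and tx-queue) per polling lcore */
        return rte_eth_dev_configure(port_id, nb_rx_q, nb_tx_q, &port_conf);
}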


* Re: [dpdk-users] Ideal design to use ip-fragmentation using multiple lcores
  2018-10-31 15:18 ` Stephen Hemminger
@ 2018-10-31 17:17   ` Sungho Hong
  0 siblings, 0 replies; 3+ messages in thread
From: Sungho Hong @ 2018-10-31 17:17 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users

Thank you very much for the reply; this cleared things up.

So in that case I need a single frag-table, correct? If I want to receive
using multiple rx-queues, the fragmented data can arrive at different
queues, which means I cannot have multiple frag-tables.

And does this imply that I have to lock the frag-table so that multiple
processes will not affect its consistency?

