DPDK usage discussions
* Non eal registered thread flow
From: Chris Ochs @ 2023-11-29 22:21 UTC
  To: users


I'm trying to get a handle on the best way to integrate DPDK with my existing
architecture.

My main application is in Rust, and it's a partitioned/batching flow; it's
an end server. I send type-erased streams between partitions using SPSC
queues. Work scheduling is separate: workers do work stealing over
partitions. The important part is that messaging is tied to partitions, not
threads.

So what I think might work best here is assigning one partition per lcore. I
already have a design where partitions can be designated as network
partitions; my regular workers then ignore those partitions and DPDK-specific
workers take them over. I designed the architecture for use with user-space
networking generally from the start.
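
As a rough sketch of what one DPDK worker per lcore could look like on the C
side, assuming the workers are EAL-launched lcores and the isolated cores are
passed to EAL on the command line; net_worker and the loop body are
placeholders, not anything from the original post:

/* Sketch: one busy-polling DPDK worker per EAL lcore, each owning one
 * network partition. */
#include <stdbool.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static volatile bool quit;

static int
net_worker(void *arg)
{
    (void)arg;
    while (!quit) {
        /* Drain this lcore's partition: consume streams from the
         * application's SPSC queues, copy into mbufs, tx, then rx. */
    }
    return 0;
}

int
main(int argc, char **argv)
{
    unsigned lcore_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    RTE_LCORE_FOREACH_WORKER(lcore_id)
        rte_eal_remote_launch(net_worker, NULL, lcore_id);

    rte_eal_mp_wait_lcore();    /* block until all workers return */
    return 0;
}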

A partition in a networking flow consumes streams from other partitions
like normal. In a DPDK flow, what I think this looks like is calling into C
to transmit for each stream. Streams would be written mbuf-aligned, so I
think transmit is just a single memcpy per stream into DPDK buffers, and
then a single call to receive.
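
A hedged sketch of that C entry point, assuming the stream bytes already form
a complete frame and the port/queue were set up beforehand; tx_stream() and
rx_burst() are hypothetical names for the functions called from Rust:

/* Sketch: copy one pre-formatted stream into an mbuf and transmit it.
 * Assumes the pool came from rte_pktmbuf_pool_create() and the port/queue
 * were configured with rte_eth_dev_configure()/rte_eth_tx_queue_setup(). */
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

int
tx_stream(struct rte_mempool *pool, uint16_t port, uint16_t queue,
          const void *stream, uint16_t len)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    char *dst = rte_pktmbuf_append(m, len);
    if (dst == NULL) {              /* stream larger than mbuf data room */
        rte_pktmbuf_free(m);
        return -1;
    }
    rte_memcpy(dst, stream, len);   /* the single copy per stream */

    if (rte_eth_tx_burst(port, queue, &m, 1) == 0) {
        rte_pktmbuf_free(m);        /* tx queue full: drop (or retry) */
        return -1;
    }
    return 0;
}

/* Receive side: one burst call pulls up to n packets into pkts[]. */
uint16_t
rx_burst(uint16_t port, uint16_t queue, struct rte_mbuf **pkts, uint16_t n)
{
    return rte_eth_rx_burst(port, queue, pkts, n);
}

Passing an array of mbufs per rte_eth_tx_burst call instead of one at a time
would amortize the per-call cost, which is presumably where the batch
question below is headed.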

Does anything stand out here as problematic? I read the known-issues
section and nothing there jumped out.



* Re: Non eal registered thread flow
From: Stephen Hemminger @ 2023-11-29 22:50 UTC
  To: Chris Ochs; +Cc: users

On Wed, 29 Nov 2023 14:21:55 -0800
Chris Ochs <chris@ochsnet.com> wrote:

> Does anything stand out here as problematic? I read the known-issues
> section and nothing there jumped out.

Are your lcores pinned and isolated?
Is your API per packet or batch?
Are these DPDK ring buffers or some other queuing mechanism?
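
For context on that last question: DPDK's native SPSC handoff is an rte_ring
created with the single-producer/single-consumer flags. A rough sketch, with
the ring name, size, and helper names all arbitrary:

/* Sketch: a single-producer/single-consumer DPDK ring carrying object
 * pointers (mbufs or application-defined stream descriptors). */
#include <rte_lcore.h>
#include <rte_ring.h>

struct rte_ring *
make_spsc_ring(void)
{
    /* Count must be a power of two; SP_ENQ/SC_DEQ make the ring SPSC. */
    return rte_ring_create("stream_ring", 1024, rte_socket_id(),
                           RING_F_SP_ENQ | RING_F_SC_DEQ);
}

/* Producer side: enqueue up to n object pointers, returns how many fit. */
unsigned
push_burst(struct rte_ring *r, void **objs, unsigned n)
{
    return rte_ring_enqueue_burst(r, objs, n, NULL);
}

/* Consumer side: dequeue up to n object pointers, returns how many arrived. */
unsigned
pull_burst(struct rte_ring *r, void **objs, unsigned n)
{
    return rte_ring_dequeue_burst(r, objs, n, NULL);
}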




* Re: Non eal registered thread flow
From: Chris Ochs @ 2023-11-29 23:35 UTC
  To: stephen; +Cc: users


The lcores would be in pinned/isolated threads.

At the level where data interacts with DPDK, it's batched. There can be
multiple batches: basically multiple vectors of data already
structured/aligned for what DPDK wants.

The SPSC queues are on the Rust side, not DPDK-provided.
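
Since the subject is non-EAL registered threads: if those pinned Rust threads
call into DPDK directly (rather than being EAL lcores as in the first sketch),
they can be registered so they get a valid lcore id. A rough sketch, assuming
rte_eal_init() has already run in the main thread and core 3 is one of the
isolated cores (both placeholders):

/* Sketch: register an externally created, pinned thread with EAL so it can
 * use lcore-aware DPDK facilities (mempool caches, rte_lcore_id(), ...).
 * Requires DPDK >= 20.08 for rte_thread_register(). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <rte_lcore.h>

static void *
net_partition_thread(void *arg)
{
    cpu_set_t set;
    (void)arg;

    /* Pin to an isolated core before touching DPDK. */
    CPU_ZERO(&set);
    CPU_SET(3, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Give this non-EAL thread a DPDK lcore id. */
    if (rte_thread_register() != 0) {
        fprintf(stderr, "rte_thread_register failed\n");
        return NULL;
    }
    printf("registered as lcore %u\n", rte_lcore_id());

    /* ... per-partition rx/tx loop goes here ... */

    rte_thread_unregister();
    return NULL;
}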

On Wed, Nov 29, 2023 at 2:50 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:

> Are your lcores pinned and isolated?
> Is your API per packet or batch?
> Are these DPDK ring buffers or some other queuing mechanism?



