The lcores would run as pinned/isolated threads.
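Roughly what that looks like on the dpdk side (the core numbers and the worker loop below are just placeholders, and the isolation itself would come from kernel boot parameters like isolcpus rather than from dpdk):

#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Placeholder per-lcore loop; the real work would be the network
 * partition logic described below. */
static int net_worker(void *arg)
{
	(void)arg;
	printf("worker running on lcore %u\n", rte_lcore_id());
	return 0;
}

int main(int argc, char **argv)
{
	/* Example launch: ./app -l 2-5 -n 4, with cores 2-5 assumed to be
	 * isolated via isolcpus=2-5 on the kernel command line. EAL creates
	 * and pins one thread per lcore in the -l list. */
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	unsigned lcore_id;
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(net_worker, NULL, lcore_id);

	rte_eal_mp_wait_lcore();
	return 0;
}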

At the level where data interacts with dpdk it's batched. There can be multiple batches: basically multiple vectors of data, already structured/aligned for what dpdk wants.
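To make that concrete, a rough C sketch of the transmit path as I picture it: the Rust side hands over a vector of frames already laid out at mbuf-friendly sizes, the shim copies each frame into an mbuf, and the whole batch goes out with one rte_eth_tx_burst call. The function name, the flat frames/frame_len layout, and the pre-built mempool/port/queue are all assumptions, not code from the actual application:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>
#include <rte_mempool.h>

#define BURST_MAX 64

/* Hypothetical shim: transmit one batch of equally sized frames that the
 * Rust side has already laid out contiguously (frame i starts at
 * frames + i * frame_len). Returns the number of frames actually sent. */
static uint16_t
burst_tx_stream(uint16_t port, uint16_t queue, struct rte_mempool *pool,
		const uint8_t *frames, uint16_t nb_frames, uint16_t frame_len)
{
	struct rte_mbuf *pkts[BURST_MAX];
	uint16_t n = RTE_MIN(nb_frames, (uint16_t)BURST_MAX);

	if (rte_pktmbuf_alloc_bulk(pool, pkts, n) != 0)
		return 0;

	/* frame_len is assumed to fit within the mbuf data room. */
	for (uint16_t i = 0; i < n; i++) {
		/* One copy per frame out of the batch into the mbuf data area. */
		char *dst = rte_pktmbuf_append(pkts[i], frame_len);
		rte_memcpy(dst, frames + (size_t)i * frame_len, frame_len);
	}

	uint16_t sent = rte_eth_tx_burst(port, queue, pkts, n);

	/* Anything the NIC queue did not accept must be freed (or retried). */
	for (uint16_t i = sent; i < n; i++)
		rte_pktmbuf_free(pkts[i]);

	return sent;
}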

The SPSC queues are on the Rust side, not dpdk-provided.
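And for completeness, the receive side of the same shim: one rte_eth_rx_burst per poll, with the batch handed back toward the Rust-side queues. hand_off_to_partition is only a stand-in for whatever crosses the FFI boundary into Rust, not a real API:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST_MAX 64

/* Stand-in for pushing a received frame toward the Rust-side partition
 * queue; in the real application this would cross the FFI boundary. */
void hand_off_to_partition(const void *data, uint16_t len);

/* One poll iteration: a single burst receive, then per-frame hand-off. */
static void
poll_rx_once(uint16_t port, uint16_t queue)
{
	struct rte_mbuf *pkts[RX_BURST_MAX];
	uint16_t nb = rte_eth_rx_burst(port, queue, pkts, RX_BURST_MAX);

	for (uint16_t i = 0; i < nb; i++) {
		hand_off_to_partition(rte_pktmbuf_mtod(pkts[i], void *),
				      rte_pktmbuf_data_len(pkts[i]));
		rte_pktmbuf_free(pkts[i]);
	}
}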

On Wed, Nov 29, 2023 at 2:50 PM Stephen Hemminger <stephen@networkplumber.org> wrote:
On Wed, 29 Nov 2023 14:21:55 -0800
Chris Ochs <chris@ochsnet.com> wrote:

> Trying to get a handle on the best way to integrate with my existing
> architecture.
>
> My main application is in Rust and it's a partitioned/batching flow. It's
> an end server. I basically send type-erased streams between partitions
> using SPSC queues. Work scheduling is separate: workers basically do work
> stealing of partitions. The important part is that messaging is tied to
> partitions, not threads.
>
> So what I think might work best here is to assign a partition per lcore.
> I already have a design where partitions can be designated as network
> partitions, and my regular workers can then ignore those partitions, with
> dpdk-specific workers taking over. I designed the architecture from the
> start for use with user-space networking generally.
>
> A partition in a networking flow consumes streams from other partitions
> as normal. In a dpdk flow, what I think this looks like is calling into C
> to transmit for each stream. Streams would be written mbuf-aligned, so I
> think this is just a single memcpy per stream into dpdk buffers, and then
> a single call to receive.
>
> Does anything stand out here as problematic? I read the known issues
> section and nothing there stood out.

Are your lcores pinned and isolated?
Is your API per packet or batch?
Are these DPDK ring buffers or some other queuing mechanism?