DPDK usage discussions
From: Chris Ochs <chris@ochsnet.com>
To: stephen@networkplumber.org
Cc: users@dpdk.org
Subject: Re: Non eal registered thread flow
Date: Wed, 29 Nov 2023 15:35:10 -0800
Message-ID: <CABJreou673jg3aYK1XtC=b8v9GpwYwAqr02XjiJZCEpkh3pPWw@mail.gmail.com>
In-Reply-To: <20231129145055.4c25d5e3@hermes.local>


Lcores would be in pinned/isolated threads.

At the level where data interacts with DPDK it's batched.  There can be
multiple batches: basically multiple vectors of data, already
structured/aligned for what DPDK wants.

The SPSC queues are on the Rust side, not DPDK-provided.
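
Roughly, the per-lcore side I have in mind looks like the sketch below: one
worker launched per pinned lcore, each owning one network partition and
draining its batches.  This is only a sketch; next_ready_batch() and
transmit_batch() are made-up stand-ins for the Rust-side SPSC consumer, and
the partition binding is hand-waved.

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

struct partition;                                   /* owned by the Rust side */
const void *next_ready_batch(struct partition *p);  /* hypothetical FFI hook */
void transmit_batch(const void *batch);             /* hypothetical FFI hook */

static struct partition *net_parts[RTE_MAX_LCORE];  /* bound elsewhere */

static int
net_worker(void *arg)
{
        struct partition *p = arg;        /* one network partition per lcore */
        const void *batch;

        for (;;) {
                batch = next_ready_batch(p);
                if (batch != NULL)
                        transmit_batch(batch);
        }
        return 0;
}

int
main(int argc, char **argv)
{
        unsigned lcore_id, i = 0;

        /* lcores pinned via the EAL core args (e.g. -l 2,4,6); the same cores
         * are isolated from the kernel scheduler with isolcpus, outside DPDK */
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        RTE_LCORE_FOREACH_WORKER(lcore_id)
                rte_eal_remote_launch(net_worker, net_parts[i++], lcore_id);

        rte_eal_mp_wait_lcore();
        return 0;
}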

On Wed, Nov 29, 2023 at 2:50 PM Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Wed, 29 Nov 2023 14:21:55 -0800
> Chris Ochs <chris@ochsnet.com> wrote:
>
> > Trying to get a handle on the best way to integrate with my existing
> > architecture.
> >
> > My main application is in Rust and it's a partitioned/batching flow. It's
> > an end server. I basically send type-erased streams between partitions
> > using SPSC queues. Work scheduling is separate. Workers basically do work
> > stealing of partitions. The important part is that messaging is tied to
> > partitions, not threads.
> >
> > So what I think might work best here is I assign a partition per lcore. I
> > already have a design where partitions can be designated as network
> > partitions, and my regular workers can then ignore these partitions, with
> > DPDK-specific workers taking over. I designed the architecture for use
> > with user-space networking generally from the start.
> >
> > A partition in a networking flow consumes streams from other partitions
> > like normal. In a DPDK flow, what I think this looks like is, for each
> > stream, a call into C to transmit. Streams would be written mbuf-aligned,
> > so I think this is just a single memcpy per stream into DPDK buffers, and
> > then a single call to receive.
> >
> > Does anything stand out here as problematic? I read the known issues
> > section and nothing there stood out as problematic.
>
> Are your lcores pinned and isolated?
> Is your API per-packet or per-batch?
> Are these DPDK ring buffers or some other queuing mechanism?
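
To make the transmit/receive part above concrete, here is roughly what I
mean, again just a sketch: one memcpy per mbuf-aligned stream into freshly
allocated mbufs, a single rte_eth_tx_burst for the whole batch, and a single
rte_eth_rx_burst on the receive side.  PORT_ID/QUEUE_ID, the mempool, and
the stream layout are placeholders, not real project code.

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define PORT_ID  0
#define QUEUE_ID 0
#define BURST    32

struct stream {                  /* placeholder for an mbuf-aligned stream */
        const void *data;
        uint16_t    len;
};

static uint16_t
tx_streams(struct rte_mempool *pool, const struct stream *s, uint16_t n)
{
        struct rte_mbuf *pkts[BURST];
        uint16_t i, sent;

        if (n > BURST)
                n = BURST;
        if (rte_pktmbuf_alloc_bulk(pool, pkts, n) != 0)
                return 0;

        for (i = 0; i < n; i++) {        /* one memcpy per stream */
                char *dst = rte_pktmbuf_append(pkts[i], s[i].len);
                if (dst != NULL)
                        memcpy(dst, s[i].data, s[i].len);
        }

        sent = rte_eth_tx_burst(PORT_ID, QUEUE_ID, pkts, n);
        for (i = sent; i < n; i++)       /* free anything the NIC didn't take */
                rte_pktmbuf_free(pkts[i]);
        return sent;
}

static uint16_t
rx_once(struct rte_mbuf **pkts)
{
        /* single receive call per iteration */
        return rte_eth_rx_burst(PORT_ID, QUEUE_ID, pkts, BURST);
}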



Thread overview: 3+ messages
2023-11-29 22:21 Chris Ochs
2023-11-29 22:50 ` Stephen Hemminger
2023-11-29 23:35   ` Chris Ochs [this message]
