From: Chris Ochs <chris@ochsnet.com>
Date: Wed, 29 Nov 2023 15:35:10 -0800
Subject: Re: Non eal registered thread flow
To: stephen@networkplumber.org
Cc: users@dpdk.org

Lcores would be in pinned/isolated threads.

At the level where data interacts with DPDK, it's batched. There can be
multiple batches: basically multiple vectors of data, already structured
and aligned for what DPDK wants.

The SPSC queues are on the Rust side, not DPDK-provided.
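Roughly what I have in mind for the transmit side, as a sketch only: the
helper name, the chunk/length arrays, and the assumption that each stream
chunk fits in a single mbuf are all mine, not settled API.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

#define BURST_MAX 32

/* Copy one batch of pre-formatted stream chunks into mbufs and hand the
 * whole batch to the NIC in a single tx burst. Port, queue, and mempool
 * are assumed to be configured elsewhere. */
static uint16_t
tx_stream_batch(uint16_t port, uint16_t queue, struct rte_mempool *pool,
                const uint8_t **chunks, const uint16_t *lens, uint16_t n)
{
    struct rte_mbuf *bufs[BURST_MAX];
    uint16_t i, sent;

    if (n > BURST_MAX)
        n = BURST_MAX;

    for (i = 0; i < n; i++) {
        bufs[i] = rte_pktmbuf_alloc(pool);
        if (bufs[i] == NULL)
            break;
        char *dst = rte_pktmbuf_append(bufs[i], lens[i]);
        if (dst == NULL) {          /* chunk larger than mbuf tailroom */
            rte_pktmbuf_free(bufs[i]);
            break;
        }
        /* the single memcpy per stream discussed below */
        rte_memcpy(dst, chunks[i], lens[i]);
    }

    sent = rte_eth_tx_burst(port, queue, bufs, i);

    /* free anything the NIC did not accept */
    while (sent < i)
        rte_pktmbuf_free(bufs[sent++]);
    return sent;
}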
On Wed, Nov 29, 2023 at 2:50 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:

> On Wed, 29 Nov 2023 14:21:55 -0800
> Chris Ochs <chris@ochsnet.com> wrote:
>
> > Trying to get a handle on the best way to integrate with my existing
> > architecture.
> >
> > My main application is in Rust and it's a partitioned/batching flow.
> > It's an end server. I basically send type-erased streams between
> > partitions using SPSC queues. Work scheduling is separate. Workers
> > basically do work stealing of partitions. The important part is that
> > messaging is tied to partitions, not threads.
> >
> > So what I think might work best here is to assign a partition per
> > lcore. I already have a design where partitions can be designated as
> > network partitions, which my regular workers then ignore, with
> > DPDK-specific workers taking over. I designed the architecture for
> > use with user space networking generally from the start.
> >
> > A partition in a networking flow consumes streams from other
> > partitions like normal. In a DPDK flow, what I think this looks like
> > is: for each stream, call into C to transmit. Streams would be
> > written mbuf-aligned, so I think this is just a single memcpy per
> > stream into DPDK buffers, and then a single call to receive.
> >
> > Does anything stand out here as problematic? I read the known issues
> > section and nothing there stood out as problematic.
>
> Are your lcores pinned and isolated?
> Is your API per packet or per batch?
> Are these DPDK ring buffers or some other queuing mechanism?
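And for the receive side of that flow, a minimal sketch of the single
batched receive call; the port/queue ids and the hand-off to the Rust
side are placeholders:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_MAX 32

/* Drain up to BURST_MAX packets in one rx burst; the network partition
 * would then move the payloads onto the Rust-side SPSC queues. */
static void
rx_once(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *bufs[BURST_MAX];
    uint16_t n = rte_eth_rx_burst(port, queue, bufs, BURST_MAX);

    for (uint16_t i = 0; i < n; i++) {
        void *data = rte_pktmbuf_mtod(bufs[i], void *);
        uint16_t len = rte_pktmbuf_data_len(bufs[i]);

        /* placeholder: enqueue (data, len) to the owning partition's
         * stream here before freeing the mbuf */
        (void)data;
        (void)len;
        rte_pktmbuf_free(bufs[i]);
    }
}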