From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Owen Hilyard <Owen.Hilyard@unh.edu>,
"Etelson, Gregory" <getelson@nvidia.com>,
"Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [PATCH] rust: RFC/demo of safe API for Dpdk Eal, Eth and Rxq
Date: Fri, 9 May 2025 16:24:38 +0000 [thread overview]
Message-ID: <PH8PR11MB6803FD2723BE0ECE7DBB44C8D78AA@PH8PR11MB6803.namprd11.prod.outlook.com> (raw)
In-Reply-To: <DM8P223MB038399660EAB15CCEC365A868D8BA@DM8P223MB0383.NAMP223.PROD.OUTLOOK.COM>
> From: Owen Hilyard
> Sent: Friday, May 09, 2025 12:53 AM
> To: Van Haaren, Harry; Etelson, Gregory; Richardson, Bruce
> Cc: dev@dpdk.org
> Subject: Re: [PATCH] rust: RFC/demo of safe API for Dpdk Eal, Eth and Rxq
>
> > From: Van Haaren, Harry <harry.van.haaren@intel.com>
> > Sent: Tuesday, May 6, 2025 12:39 PM
> > To: Owen Hilyard <Owen.Hilyard@unh.edu>; Etelson, Gregory <getelson@nvidia.com>; Richardson, Bruce <bruce.richardson@intel.com>
> > Cc: dev@dpdk.org <dev@dpdk.org>
> > Subject: Re: [PATCH] rust: RFC/demo of safe API for Dpdk Eal, Eth and Rxq
<snip>
> > Hi All!
> >
> > Great to see passionate & detailed replies & input!
> >
> > Please folks - lets try remember to send plain-text emails, and use > to indent each reply.
> >Its hard to identify what I wrote (1) compared to Owen's replies (2) in the archives otherwise.
> > (Adding some "Harry wrote" and "Owen wrote" annotations to try help future readability.)
>
> My apologies, I'll be more careful with that.
Thanks! The reply here is perfect.
> > Maybe it will help to split the conversation into two threads, with one focussing on
> "DPDK used through Safe Rust abstractions", and the other on "future cool use-cases".
>
> Agree.
>
> > Perhaps I jumped a bit too far ahead mentioning async runtimes, and while I like the enthusiasm for designing "cool new stuff", it is probably better to be realistic around what will get "done": my bad.
> >
> > I'll reply to the "DPDK via Safe Rust" topics below, and start a new thread (with same folks on CC) for "future cool use-cases" when I've had a chance to clean up a little demo to showcase them.
> >
> >
> > > > > Thanks for sharing. However, IMHO using EAL for thread management in rust
> > > > > is the wrong interface to expose.
> > > >
> > > > EAL is a singleton object in DPDK architecture.
> > > > I see it as a hub for other resources.
> >
> > Harry Wrote:
> > > Yep, i tend to agree here; EAL is central to the rest of DPDK working correctly.
> > > And given EALs implementation is heavily relying on global static variables, it is
> > > certainly a "singleton" instance, yes.
> >
> > Owen wrote:
> > > I think a singleton is one way to implement this, but then you lose some of the RAII/automatic resource management behavior. It would, however, make some APIs inherently unsafe or very unergonomic unless we were to force rte_eal_cleanup to be run via atexit(3) or the platform equivalent and forbid the user from running it themselves. For a lot of Rust runtimes similar to the EAL (tokio, glommio, etc), once you spawn a runtime it's around until process exit. The other option is to have a handle which represents the state of the EAL on the Rust side and runs rte_eal_init on creation and rte_eal_cleanup on destruction. There are two ways we can make that safe. First, reference counting: once the handles are created, they can be passed around easily, and the last one runs rte_eal_cleanup when it gets dropped. This avoids having tons of complicated lifetimes, and I think that, everywhere that it shouldn't affect fast path performance, we should use refcounting.
> >
> > Agreed, refcounts for EAL "singleton" concept yes. For the record, the initial patch actually returns a
> "dpdk" object from dpdk::Eal::init(), and Drop impl has a // TODO rte_eal_cleanup(), so well aligned on approach here.
> > https://patches.dpdk.org/project/dpdk/patch/20250418132324.4085336-1-harry.van.haaren@intel.com/
>
> One thing I think I'd like to see is using a "newtype" for important numbers (ex: "struct EthDevQueueId(pub u16)"). This prevents some classes of error but if we make the constructor public it's at most a minor inconvenience to anyone who has to do something a bit odd.
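To make the newtype suggestion concrete, here is a minimal sketch; the `PortId` sibling and the `rx_burst` signature are illustrative assumptions, not the patch's actual API:

```rust
// Newtype wrappers give distinct types to queue ids and port ids, so the
// compiler rejects passing one where the other is expected.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct EthDevQueueId(pub u16);

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct PortId(pub u16); // hypothetical sibling newtype

// Stand-in for a real rte_eth_rx_burst() wrapper.
pub fn rx_burst(port: PortId, queue: EthDevQueueId) -> usize {
    (port.0 + queue.0) as usize
}

fn main() {
    // rx_burst(EthDevQueueId(3), PortId(0)) would be a compile error.
    let n = rx_burst(PortId(0), EthDevQueueId(3));
    assert_eq!(n, 3);
}
```

The public `.0` field keeps the escape hatch Owen mentions: anyone doing something odd can construct or unwrap the id freely.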
>
> > > Owen wrote:
> > > The other option is to use lifetimes. This is doable, but is going to force people who are more likely to primarily be C or C++ developers to dive deep into Rust's type system if they want to build abstractions over it. If we add async into the mix, as many people are going to want to do, it's going to become much, much harder. As a result, I'd advocate for only using it for data path components where refcounting isn't an option.
> >
> > +1 to not using lifetimes here, it is not the right solution for this EAL / singleton type problem.
>
> Having now looked over the initial patchset in more detail, I think we do have a question of how far down "it compiles it works" we want to go. For example, using typestates to make Eal::take_eth_ports impossible to call more than once using something like this:
>
> #[derive(Debug, Default)]
> pub struct Eal<const HAS_ETHDEV_PORTS: bool> {
>     eth_ports: Vec<eth::Port>,
> }
>
> impl<const HAS_ETHDEV_PORTS: bool> Eal<HAS_ETHDEV_PORTS> {
>     pub fn init() -> Result<Self, String> {
>         // EAL init() will do PCI probe, and VDev enumeration will find/create eth ports.
>         // This code should loop over the ports, and build up Rust structs representing them.
>         let eth_ports = vec![eth::Port::from_u16(0)];
>         Ok(Eal { eth_ports })
>     }
> }
>
> impl Eal<true> {
>     pub fn take_eth_ports(mut self) -> (Eal<false>, Vec<eth::Port>) {
>         // mem::take leaves an empty Vec behind, avoiding a move out of a type with Drop.
>         (Eal::<false>::default(), std::mem::take(&mut self.eth_ports))
>     }
> }
>
> impl<const HAS_ETHDEV_PORTS: bool> Drop for Eal<HAS_ETHDEV_PORTS> {
>     fn drop(&mut self) {
>         if HAS_ETHDEV_PORTS {
>             // extra desired port cleanup
>         }
>         // todo: rte_eal_cleanup()
>     }
> }
>
> This does add some noise to looking at the struct, but also lets the compiler enforce what state a struct should be in to call a given function. Taken to its logical extreme, we could create an API where many of the "resource in wrong state" errors should be impossible. However, it also requires more knowledge of Rust's type system on the part of the people making the API and can be a bit harder to understand without an LSP helping you along.
This is too much in my opinion. I know there's value, but the ergonomics suffer significantly if we have generics over Eal.
I'd prefer not to treat Ethdev differently from other devices. And if we give Ethdev a generic parameter on Eal, then the others would need one too, exploding the generic count and complexity.
Techie note for eager readers: this technique can also be used for compile-time enforcement of lock ordering (avoiding ABBA deadlocks)!
Thanks to Fuchsia OS and Joshua Liebow-Feeser https://lwn.net/Articles/995814/,
and to Angus Morrison for the simpler demo at https://docs.rs/lock_tree/latest/lock_tree/
So this technique is really cool, but not the right tradeoff in this case.
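For the curious, here is a heavily simplified token-passing sketch of that compile-time lock-ordering idea; it is not lock_tree's actual API, just an illustration of the principle:

```rust
use std::marker::PhantomData;

// Type-level "proof tokens": lock_b() only accepts the token produced by
// lock_a(), so acquiring B before A simply does not compile.
pub struct Unlocked;
pub struct HoldingA;
pub struct Token<S>(PhantomData<S>);

pub fn start() -> Token<Unlocked> {
    Token(PhantomData)
}

// Taking "lock A" consumes the unlocked token and yields proof of holding A.
pub fn lock_a(_t: Token<Unlocked>) -> Token<HoldingA> {
    Token(PhantomData)
}

// "Lock B" is only reachable while provably holding A; ordering is enforced
// by the type system, with zero runtime cost.
pub fn lock_b(_t: Token<HoldingA>) -> u32 {
    42 // stand-in for work done under both locks
}

fn main() {
    let t = start();
    let t = lock_a(t);
    let v = lock_b(t);
    // lock_b(start()) would not compile: no Token<HoldingA> exists yet.
    assert_eq!(v, 42);
}
```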
<snip>
> > The key point above is "except where runtimes force them to mix". The DPDK rxq concept (struct Rxq in the code linked above) is !Send.
> > As a result, it cannot be moved between threads. That allows per-lcore concepts to be used for performance.
>
> The problem is that, with Tokio, it also can't be held across an await point. I agree that !Send is correct, but the existence of !Send resources means that integration with Tokio is much, much harder. For PMDs with RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, TX is fine, but as far as I am aware there is no equivalent for RX. And, to safely take advantage of the TX version, we'd need to know the capabilities of the target PMD at compile time, which is part of why my own bindings "devirtualize" the EAL and require a top-level function which dispatches based on the capabilities provided by the PMDs I make use of. Glommio was easily able to integrate safely (theoretically Monoio would be too, although I haven't used it), but I haven't found a safe way to mix Tokio and queue handles which doesn't make it nearly impossible to use async, even when taking that fairly extreme measure.
>
> > The point I was trying to make is that we (the DPDK safe rust wrapper API) should not be prescriptive in how it is used.
> > In other words: we should allow the user to decide how to spawn/manage/run threads.
> >
> > We must encode the DPDK requirements of e.g. "Rxq concept" with !Send, !Sync marker traits.
> > Then the Rust compiler will at compile-time ensure the users code is correct.
>
> I agree that !Send and !Sync are likely correct for Rxqs, however, we also need to be very careful in documenting the WHY of !Send and !Sync in each context. For instance, how are we going to get the queue handles to the threads which run the data path if we get all of them from an Eal struct in a Vec on the main thread? We may need to have a way to "deactivate" them so the user can't use them for queue operations but they are Send, !Sync, emit a fence, and then when the user "activates" them it performs another fence to force anything the last thread did with the queue to be visible on the new core. I suspect we'll need to apply a similar pattern for other thread unsafe parts of DPDK in order to get them to where they need to be during execution.
Look at the patch: the difference between RxqHandle and Rxq encodes exactly what you're asking for.
Gregory renamed the "change" function to .activate(), but the fundamental "consume the struct and give back a !Send pollable Rxq" behaviour is the same.
Agree we need things documented, but the C API docs should already cover that; see the Rxq example as explained at Userspace: https://youtu.be/lb6xn2xQ-NQ?t=890.
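For readers following along without the patch open, a simplified sketch of that handle/activate split (the field names and fence comments are illustrative, not the patch's exact code):

```rust
use std::marker::PhantomData;
use std::thread;

// RxqHandle is Send: it can be moved to whichever thread will poll the
// queue, but it exposes no polling methods itself.
pub struct RxqHandle {
    port: u16,
    queue: u16,
}

// Rxq is !Send + !Sync (the raw-pointer PhantomData opts out of both),
// so once activated it is pinned to the activating thread.
pub struct Rxq {
    port: u16,
    queue: u16,
    _not_send: PhantomData<*mut ()>,
}

impl RxqHandle {
    pub fn activate(self) -> Rxq {
        // A real implementation would emit a fence here, so writes by the
        // previous owning thread are visible before polling starts.
        Rxq { port: self.port, queue: self.queue, _not_send: PhantomData }
    }
}

impl Rxq {
    pub fn rx_burst(&mut self, _mbufs: &mut [u8]) -> usize {
        // Stand-in for rte_eth_rx_burst(self.port, self.queue, ...).
        let _ = (self.port, self.queue);
        0
    }
}

fn main() {
    let handle = RxqHandle { port: 0, queue: 0 };
    let worker = thread::spawn(move || {
        let mut rxq = handle.activate(); // Rxq cannot leave this thread
        rxq.rx_burst(&mut [])
    });
    assert_eq!(worker.join().unwrap(), 0);
}
```

Attempting to send the activated Rxq to another thread is rejected at compile time, which is exactly the "if it compiles, the threading is correct" property discussed below.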
> > I don't believe that I can identify all use-cases, so we cannot design requirements around statements like "I think X is more likely than Y".
>
> I agree, this is why unsafe escape hatches will be necessary. Someone will have some weird edge-case like a CPU with no cache that makes it fine to move Rxqs around with abandon.
No need for unsafe; we just shouldn't be prescriptive about how threading "should work": stay flexible and let the user decide.
All the proposed DPDK-rs does is provide safe Rust structs that encode the correct Send/Sync requirements, nothing more.
After that, any user can correctly use our APIs, and if it compiles, then it's correct (from a threading point of view).
Even users with "weird edge-cases like a CPU with no cache" will still work correctly.
> > Harry wrote:
> > > Lets focus on Tokio first: it is an "async runtime" (two links for future readers)
> > > <snip>
> > > So an async runtime can run "async" Rust functions (called Futures, or Tasks when run independently..)
> > > There are lots of words/concepts, but I'll focus only on the thread creation/control aspect, given the DPDK EAL lcore context.
> > >
> > > Tokio is a work-stealing scheduler. It spawns "worker" threads, and then gives these "tasks"
> > > to various worker cores (similar to how Golang does its work-stealing scheduling). Some
> > > DPDK crate users might like this type of workflow, where e.g. RXQ polling is a task, and the
> > > "tokio runtime" figures out which worker to run it on. "Spawning" a task causes the "Future"
> > > to start executing. (technical Rust note: notice the "Send" bound on Future: https://docs.rs/tokio/latest/tokio/task/fn.spawn.html )
> > > The work stealing aspect of Tokio has also led to some issues in the Rust ecosystem. What it effectively means is that every "await" is a place where you might get moved to another thread. This means that it would be unsound to, for example, have a queue handle on devices without MT-safe queues unless we want to put a mutex on top of all of the device queues. I personally think this is a lot of the source of people thinking that Rust async is hard, because Tokio forces you to be thread safe at really weird places in your code and has issues like not being able to hold a mutex over an await point.
> > >
> > > Other users might prefer the "thread-per-core" and CPU pinning approach (like DPDK itself would do).
> > > nit: Tokio also spawns a thread per core, it just freely moves tasks between cores. It doesn't pin because it's designed to interoperate with the normal kernel scheduler more nicely. I think that not needing pinned cores is nice, but we want the ability to pin for performance reasons, especially on NUMA/NUCA systems (NUCA = Non-Uniform Cache Architecture, almost every AMD EPYC above 8 cores, higher core count Intel Xeons for 3 generations, etc).
> > > Monoio and Glommio both serve these use cases (but in slightly different ways!). They both spawn threads and do CPU pinning.
> > > Monoio and Glommio say "tasks will always remain on the local thread". In Rust techie terms: "Futures are !Send and !Sync"
> > > https://docs.rs/monoio/latest/monoio/fn.spawn.html
> > > https://docs.rs/glommio/latest/glommio/fn.spawn_local.html
> >
> > Owen wrote:
> > > There is also another option, one which would eliminate "service cores". We provide both a work stealing pool of tasks that have to deal with being yanked between cores/EAL threads at any time, but aren't data plane tasks, and then a different API for spawning tasks onto the local thread/core for data plane tasks (ex: something to manage a particular HTTP connection). This might make writing the runtime harder, but it should provide the best of both worlds provided we can build in a feature (Rust provides a way to "ifdef out" code via features) to disable one or the other if someone doesn't want the overhead.
> >
> > Hah, yeah.. (as maintainer of service cores!) I'm aware that the "async Rust" cooperative scheduling is very similar.
> > That said, the problem service-cores set out to solve is a very different one to how "async Rust" came about.
> > The implementations, ergonomics, and the language its written in are different too... so they're different beasts!
>
> I think we could still make use of the idea of separate pools of thread local and global tasks.
>
> > We don't want to start writing "dpdk-async-runtime". The goal is not to duplicate everything, we must integrate with existing.
>
> What do you picture someone who picks up "dpdk-rs" seeing as the interface to DPDK when it's fully integrated? Do they enable a feature flag in their async runtime and the runtime handles it for them, do they set up DPDK and start the runtime? Most of the libraries I'm aware of assume the presence of an OS network stack. Yes, there are some like smoltcp which are capable of operating on top of the l2 interface provided by DPDK, but most are going to want a network stack to exist on top of.
DPDK-rs remains DPDK, and the Rust APIs remain at the same level as the C APIs.
When I say "integrate with", I mean that DPDK-rs APIs should enable others to build on top of them.
I reference some examples (e.g. SmolTCP, Tokio) because knowledge of how they could consume DPDK gives good context.
I am NOT proposing that DPDK-rs includes more features than DPDK-via-C-API.
DPDK-rs is "just" a safe Rust interface to DPDK functionality.
I am advocating that we understand how things integrate, and try to support and be aware of those usages,
primarily to ensure that topics like threading can be resolved well. Yes, other libraries expect a TcpListener,
and libraries like SmolTCP (or the DemiKernel netstack, or FuchsiaOS's netstack3, etc.) may provide that bridge.
But DPDK-rs is just DPDK: as first priority, a high-performance L2 ethernet packet I/O library.
Due to Rust language features, we can build in safety via Send/Sync on structs, and nice API design.
To me, that's the goal for a minimal DPDK-rs release.
> > I will try to provide some examples of integrating DPDK with other Rust networking projects, to prove that it can be done, and is useful.
> >
> > Harry wrote:
> > > So there are at least 3 different async runtimes (and I haven't even talked about async-std, smol, embassy, ...) which
> > > all have different use-cases, and methods of running "tasks" on threads. These runtimes exist, and are widely used,
> > > and applications make use of their thread-scheduling capabilities.
> > >
> > > So "async runtimes" do thread creation (and optionally CPU pinning) for the user.
> > > Other libraries like "Rayon" are thread-pool managers, those also have various CPU thread-create/pinning capabilities.
> > > If DPDK *also* wants to do thread creation/management and CPU-thread-to-core pinning for the user, that creates tension.
> > > The other problem is that most of these async runtimes have IO very tightly integrated into them. A large portion of Tokio had to be forked and rewritten for io_uring support, and DPDK is a rather stark departure from what they were all designed for. I know that both Tokio and Glommio have "start a new async runtime on this thread" functions, and I think that Tokio has an "add this thread to a multithreaded runtime" somewhere.
> > >
> > > I think the main thing that DPDK would need to be concerned about is that many of these runtimes use thread locals, and I'm not sure if that would be transparently handled by the EAL thread runtime since I've always used thread per core and then used the Rust runtime to multiplex between tasks instead of spawning more EAL threads.
> > >
> > > Rayon should probably be thought of in a similar vein to OpenMP, since it's mainly designed for batch processing. Unless someone is doing some fairly heavy computation (the kind where "do we want a GPU to accelerate this?" becomes a question) inside of their DPDK application, I'm having trouble thinking of a use case that would want both DPDK and Rayon.
> >>
> > > > Bruce wrote: "so having Rust (not DPDK) do all thread management is the way to go (again IMHO)."
> > >
> > > I think I agree here, in order to make the Rust DPDK crate usable from the Rust ecosystem,
> > > it must align itself with the existing Rust networking ecosystem.
> > >
> > > That means, the DPDK Rust crate should not FORCE the usage of lcore pinnings and mappings.
> > > Allowing a Rust application to decide how to best handle threading (via Rayon, Tokio, Monoio, etc)
> > > will allow much more "native" or "ergonomic" integration of DPDK into Rust applications.
> >
> > Owen wrote:
> > > I'm not sure that using DPDK from Rust will be possible without either serious performance sacrifices or rewrites of a lot of the networking libraries. Tokio continues to mimic the BSD sockets API for IO, even with the io_uring version, as does glommio. The idea of the "recv" giving you a buffer without you passing one in isn't really used outside of some lower-level io_uring crates. At a bare minimum, even if DPDK managed to offer an API that works exactly the same ways as io_uring or epoll, we would still need to go to all of the async runtimes and get them to plumb DPDK support in or approve someone from the DPDK community maintaining support. If we don't offer that API, then we either need rewrites inside of the async runtimes or for individual libraries to provide DPDK support, which is going to be even more difficult.
> >
> > Regarding traits used for IO, correct: many are focused on "recv" giving you a buffer, but not all. Look at Monoio, specifically the *Rent APIs:
> > https://docs.rs/monoio/latest/monoio/io/index.html#traits
>
> As far as I can tell, the *Rent APIs for Monoio have the same problem: they require you to pass in a buffer, and to satisfy that API we'd need to throw out zero copy, pass that buffer directly to the PMD, or do some weird thing where we use that API to recycle buffers back into the mempool. I see, in Monoio terms, a DPDK API looking more like TcpStream::read(&mut self) -> impl Future<Output = BufResult<usize, dpdk::PktMbuf>> or some equivalent abstraction on top.
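A synchronous sketch of that ownership-passing shape (the `PktMbuf` and `RxStream` types below are stand-ins for illustration, not real DPDK wrappers):

```rust
// Ownership-passing receive: recv() hands back an owned buffer instead of
// filling a caller-provided one, which is what keeps the rx path zero-copy.
pub struct PktMbuf {
    pub data: Vec<u8>, // the real mbuf would wrap hugepage-backed memory
}

pub struct RxStream {
    pending: Vec<PktMbuf>, // stand-in for packets already received by the PMD
}

impl RxStream {
    // An async version would be: async fn recv(&mut self) -> Option<PktMbuf>,
    // yielding to the runtime until the queue has a packet.
    pub fn recv(&mut self) -> Option<PktMbuf> {
        self.pending.pop()
    }
}

fn main() {
    let mut s = RxStream {
        pending: vec![PktMbuf { data: vec![1, 2, 3] }],
    };
    let pkt = s.recv().expect("one packet queued");
    assert_eq!(pkt.data.len(), 3); // the caller now owns the buffer outright
    assert!(s.recv().is_none());
}
```

Dropping (or recycling) the returned PktMbuf, rather than copying out of it, is what distinguishes this from the BSD-sockets-style "fill my buffer" contract.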
>
> > Owen wrote:
> > > I agree that forcing lcore pinnings and mappings isn't good, but I think that DPDK is well within its rights to build its own async runtime which exposes a standard API. For one thing, the first thing Rust users will ask for is a TCP stack, which the community has been discussing and debating for a long time. I think we should figure out whether the goal is to allow DPDK applications to be written in Rust, or to allow generic Rust applications to use DPDK. The former means that the audience would likely be Rust-fluent people who would have used DPDK regardless, and are fine dealing with mempools, mbufs, the eal, and ethdev configuration. The latter is a much larger audience who is likely going to be less tolerant of dpdk-rs exposing the true complexity of using DPDK. Yes, Rust can help make the abstractions better, but there's an amount of inherent complexity in "Your NIC can handle IPSec for you and can also direct all IPv6 traffic to one core" that I don't think we can remove.
> >
> > Ok, we're getting very far into future/conceptual design here.
> > For me, DPDK having its own async runtime and its own DPDK TCP stack is NOT the goal.
> > We should try to integrate DPDK with existing software environments - not rewrite the world.
>
> Which existing software environments are you thinking of exactly? Most Rust applications that use networking are going to be using Axum, Tower, and the other crates that you've mentioned, and all of those rely on having a TCP stack to be useful. I have found vanishingly few Rust crates which handle integration with DPDK without me editing them to some degree. I'd like to know where you're finding existing Rust software environments which don't care about the presence of a network stack but are still networking oriented. If the goal is to take a DPDK application that would have been written in C/C++ and write it in Rust instead, that is very different than taking an application which would have happily used the OS network stack, such as an HTTP server which deals with normal (<1k RPS) amounts of traffic, and moving it onto DPDK, and it seems to me like you are suggesting that we should focus on the latter.
As above, DPDK-rs is for accelerated packet I/O. Perhaps with some offload features etc. in future,
but fundamentally it's a high-speed packet I/O library.
Other libraries can build on top; I've done a small (sorry for the pun!) example with SmolTCP,
integrating DPDK into its "phy" device abstraction: it is not difficult. This provides a route
to TCP with high-performance I/O under the hood...
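As a rough illustration of the shape of that integration (the trait below is illustrative only; smoltcp's real `phy::Device` uses RxToken/TxToken types and timestamps):

```rust
// A smoltcp-style phy abstraction, boiled down: the TCP stack only needs
// "give me the next L2 frame" and "send this L2 frame".
trait L2Device {
    fn receive(&mut self) -> Option<Vec<u8>>;
    fn transmit(&mut self, frame: &[u8]) -> bool;
}

// Stand-in for a DPDK-backed device wrapping an Rxq/Txq pair.
struct DpdkPhy {
    rx_queue: Vec<Vec<u8>>, // frames a PMD rx_burst() would have delivered
}

impl L2Device for DpdkPhy {
    fn receive(&mut self) -> Option<Vec<u8>> {
        self.rx_queue.pop()
    }
    fn transmit(&mut self, _frame: &[u8]) -> bool {
        true // pretend tx_burst() enqueued the frame
    }
}

fn main() {
    let mut phy = DpdkPhy { rx_queue: vec![vec![0u8; 64]] };
    let frame = phy.receive().expect("one frame available");
    assert_eq!(frame.len(), 64);
    assert!(phy.transmit(&frame)); // echo the frame back out
}
```

The TCP/IP stack sits entirely above this trait, which is why swapping a kernel tap device for DPDK queues is a contained change.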
You mention "HTTP is <1k RPS": that assumption is not correct in all cases.
Use-cases like Next-Gen-FireWall (NGFW) and reverse-proxy require L7 HTTP processing.
Some even go as far as doing "TLS bumping" (aka MITM inspection; e.g. internally in a company network).
In these cases, the requirement for L7 HTTP(S) parsing and TLS decrypt/DPI/re-encrypt is huge, with
DPDK levels of performance absolutely being required (or scaling out to 100s of boxes doing <1k RPS each!)
I believe the above cases are not easily catered for, because the projects (e.g. Snort, Envoy)
were mostly designed in a pre-DPDK era, and hence expect kernel/FD based I/O. I believe that the lack
of a clear C-API abstraction up into the L7/HTTP layers has stifled some of those projects from consuming DPDK.
So yes, DPDK-rs initially should focus on core priorities: L2 ethernet I/O.
But because abstractions are more easily ported in Rust, ensuring we don't "design out" these
other use-cases is very important to me: I believe it can expand the potential use-cases for the
core DPDK functionality (Ethdev and the PMDs) a lot.
> > Owen wrote:
> > > I personally think that making an API for DPDK applications to be written in Rust, and then steadily adding abstractions on top of that until we arrive at something that someone who has never looked at a TCP header can use without too much confusion. That was part of the goal of the Iris project I pitched (and then had to go finish another project so the design is still WIP). I think that a move to DPDK is going to be as radical of a change as a move to io_uring, however, DPDK is fast enough that I think it may be possible to convince people to do a rewrite once we arrive at that high level API.
> >
> > I haven't heard of the Iris project you mentioned, is there something concrete to learn from, or is it too WIP to apply?
>
> I have some design docs, but nothing concrete. I got pulled back to another project which is still ongoing shortly after I gave the talk at the last DPDK summit. The main goal of Iris is to provide a DPDK-based alternative to something like a gRPC with a message-based API instead of a byte-based one, and to take advantage of the massive amount of extra breathing room under that new API (as compared to TCP) to plumb in the various accelerators integrated into DPDK alongside a network stack. It's based on observations that many developers aren't even working at a TCP or HTTP level any more, but are instead using "JSON RPC over HTTPS which is automatically converted into objects by their HTTP server framework" or something like gRPC to have a "send message to server" and "get message to server" API. Most of what I have for that is a lot of time spent thinking about a Rust-based API on top of DPDK as a foundation for building the rest of the network stack on top.
Wow, big project goals; interesting. (Techie note: check out Zenoh, and look at how SmolTCP's rx/tx buffers could be allocated in hugepages; lots of cool potential here!)
As above, I think DPDK-rs should focus on "Safe L2 packet I/O" for Rust. So while the above is "cool stuff", my focus is on a good/safe L2 API first and foremost.
> > Owen wrote:
> > > "Swap out your sockets and rework the functions that do network IO for a 5x performance increase" is a very, very attractive offer, but for us to get there I think we need to have DPDK's full potential available in Rust, and then build as many zero-overhead (zero cost or you couldn't write it better yourself) abstractions as we can on top. I want to avoid a situation where we build up to the high-level APIs as fast as we can and then end up in a situation where you have "Easy Mode" and then "C DPDK written in Rust" as your two options.
> >
> > My perspective is that we're carefully designing "Safe Rust" APIs, and will have "DPDKs full potential" as a result.
> > I'm not sure where the "easy mode" comment applies. But lets focus on code - and making concrete progress - over theoretical discussions.
> >
> > I'll keep my input more concise in future, and try to get more patches on the list for review.
> > > > Regards,
> > > > Gregory
> > >
> > > Apologies for the long-form, "wall of text" email, but I hope it captures the nuance of threading and
> > > async runtimes, which I believe in the long term will be very nice to capture "async offload" use-cases
> > > for DPDK. To put it another way, lookaside processing can be hidden behind async functions & runtimes,
> > > if we design the APIs right: and that would be really cool for making async-offload code easy to write correctly!
> > >
> > > Regards, -Harry
> > >
> > > Sorry for my own walls of text. As a consequence of working on Iris I've spent a lot of time thinking about how to make DPDK easier to use while keeping the performance intact, and I was already thinking in Rust since it provides one of the better options for these kinds of abstractions (the other option I see is Mojo, which isn't ready yet). I want to see DPDK become more accessible, but the performance and access to hardware is one of the main things that make DPDK special, so I don't want to compromise that. I definitely agree that we need to force DPDK's existing APIs to justify themselves in the face of the new capabilities of Rust, but I think that starting from "How are Rust applications written today?" is a mistake.
> > >
> > > Regards,
> > > Owen
> >
> > Generally agree, but just this line stood out to me:
> > > Owen wrote: I think that starting from "How are Rust applications written today?" is a mistake.
> >
> > We have to understand how applications are written today, in order to understand what it would take to move them to a DPDK backend.
> > In C, consuming DPDK is hard, as applications expect TCP via sockets, and DPDK provides mbuf*s: that's a large mismatch. (Yes I'm aware of various DPDK-aware TCP stacks etc.)
> >
> > In Rust, applications expect a "let tcp_port = TcpListener::bind()", and then to "tcp_port.accept()" incoming requests.
> > Those requirements can be met by: std::net::TcpListener, tokio::net::TcpListener, and in future, some DPDK (SmolTCP?) based TcpListener.
> > - https://doc.rust-lang.org/std/net/struct.TcpListener.html
> > - https://docs.rs/tokio/latest/tokio/net/struct.TcpListener.html
> >
> > The ability to move between abstractions is much easier in Rust. As a result, providing "normal looking APIs" is IMO the best way forward.
>
> Yes, moving between abstractions is easier in Rust, but I think that the abstraction provided by std::net::TcpListener and tokio::net::TcpListener is flawed. I'm not sure there is a good way to provide a "normal" API without fairly serious performance compromises. For example, as I'm sure everyone here is aware, the traditional BSD sockets API requires double the memory bandwidth that a zero-copy one does on the rx path. Those APIs also ignore TLS, meaning that we would actually need to go look at a wrapper over rustls or some other TLS implementation as what users interact with. I can keep going up levels, but this is why I decided to put the highest level of abstraction in Iris, the one I intend most people to interact with at "get this blob of bytes over to that other server as a message, possibly encrypting it, compressing it, doing zero trust checks, etc". I'm not sure if applications expect a TcpListener, so much as an HttpListener, or a JsonRPCListener. I think it would be wise to determine what type of API people would want for a dpdk-rs, rather than making an assumption that they want something like BSD sockets. Even inside of the kernel io_uring has been breaking away from that API with an API that looks a lot more like what I would expect from DPDK, and providing ergonomics benefits to users while doing it.
>
> > Regards, and thanks for the input & discussion. -Harry
>
> Thanks for the discussion, and I hope to continue to work with all of you on this,
> Owen
Thanks, good input! Regards, -Harry
Thread overview: 19+ messages
2025-04-17 15:10 Harry van Haaren
2025-04-17 18:58 ` Etelson, Gregory
2025-04-18 11:40 ` Van Haaren, Harry
2025-04-20 8:57 ` Gregory Etelson
2025-04-24 16:06 ` Van Haaren, Harry
2025-04-27 18:50 ` Etelson, Gregory
2025-04-30 18:28 ` Gregory Etelson
2025-05-01 7:44 ` Bruce Richardson
2025-05-02 12:46 ` Etelson, Gregory
2025-05-02 13:58 ` Van Haaren, Harry
2025-05-02 15:41 ` Gregory Etelson
2025-05-02 15:57 ` Bruce Richardson
2025-05-03 17:13 ` Owen Hilyard
2025-05-06 16:39 ` Van Haaren, Harry
2025-05-08 23:53 ` Owen Hilyard
2025-05-09 16:24 ` Van Haaren, Harry [this message]
2025-04-18 13:23 ` [PATCH 1/3] " Harry van Haaren
2025-04-18 13:23 ` [PATCH 2/3] rust: split main into example, refactor to lib.rs Harry van Haaren
2025-04-18 13:23 ` [PATCH 3/3] rust: showcase port Rxq return for stop() and reconfigure Harry van Haaren