From: Nitin Saxena
Date: Thu, 17 Oct 2024 14:18:53 +0530
Subject: Re: [EXTERNAL] Re: [RFC PATCH 0/3] add feature arc in rte_graph
To: Robin Jarry
Cc: David Marchand, Nitin Saxena, Jerin Jacob, Kiran Kumar Kokkilagadda,
    Nithin Kumar Dabilpuram, Zhirun Yan, dev@dpdk.org, Christophe Fontaine

Hi Robin,

See inline comments.

Thanks,
Nitin

On Thu, Oct 17, 2024 at 1:20 PM Robin Jarry wrote:
>
> Hi Nitin, all,
>
> Nitin Saxena, Oct 17, 2024 at 09:03:
> > Hi Robin/David and all,
> >
> > We realized the feature arc patch series is difficult to understand as
> > a new concept. Our objectives with the feature arc changes are the
> > following:
> >
> > 1. Allow reusability of standard DPDK nodes (defined in lib/nodes/*)
> >    with out-of-tree applications (like grout). Currently, out-of-tree
> >    graph applications duplicate the standard nodes instead of reusing
> >    the ones that are available. In the long term, we would like to
> >    mature the standard DPDK nodes with the flexibility of hooking them
> >    to out-of-tree application nodes.
>
> It would be ideal if the in-built nodes could be reused. When we started
> working on grout, I tried multiple approaches where I could reuse these
> nodes, but all failed. The nodes' public API seems tailored for app/graph
> but does not fit well with other control plane implementations.
>
> One of the main issues I had is that the ethdev_rx and ethdev_tx nodes
> are cloned per rxq/txq associated with a graph worker. The rte_node
> API requires that every clone has a unique name. This in turn makes hot
> plugging of DPDK ports very complex, if not impossible.

Agreed. I guess hot plugging of DPDK ports was not the objective when
the initial changes went in, but we can add hot-plugging functionality
without affecting performance.
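
For context, a rough sketch of the pattern the current model forces (the
helper below is invented for illustration, not taken from lib/node or
grout): every (port, rxq) pair served by a worker needs its own uniquely
named clone of the built-in "ethdev_rx" node, and the graph has to be
recreated for new clones to take effect.

/* Rough sketch, invented helper: one uniquely named ethdev_rx clone per
 * (port, rxq). Adding a port after the graph exists therefore means
 * creating fresh clones and rebuilding the graph. */
#include <stdint.h>
#include <stdio.h>
#include <rte_graph.h>

static rte_node_t
clone_ethdev_rx(uint16_t port_id, uint16_t rxq)
{
    char name[RTE_NODE_NAMESIZE];

    /* lib/node uses a "<port>-<queue>" suffix, so the clone ends up
     * named roughly "ethdev_rx-<port>-<queue>". */
    snprintf(name, sizeof(name), "%u-%u", port_id, rxq);
    return rte_node_clone(rte_node_from_name("ethdev_rx"), name);
}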

> For example, with the in-built nodes, it is not possible to change the
> number of ports or their number of RX queues without destroying the
> whole graph and creating a new one from scratch.

Coincidentally, I have also encountered these technical issues while
writing an out-of-tree application [1]. I had internal discussions with
@Jerin Jacob and other graph maintainers to fix these shortcomings. If
you want, we can collaborate on fixing these issues.

For the [port, rxq] pair mapping to a worker core, I have an alternate
design [2] which currently stops worker cores. It can be enhanced with
an RCU-based scheme for an ideal DPDK implementation.

[1]: https://marvellembeddedprocessors.github.io/dao/guides/applications/secgw-graph.html
[2]: https://github.com/MarvellEmbeddedProcessors/dao/blob/dao-devel/app/secgw-graph/nodes/rxtx/ethdev-rx.c#L27

> Also, the current implementation of "ip{4,6}-rewrite" handles writing
> ethernet header data. This would prevent it from using this node for an
> IP-in-IP tunnel interface as we did in grout.

For IP-in-IP, a separate rewrite node would be required which computes
the checksum etc. but does not add rewrite data.

> Do you think we could change the in-built nodes to enforce OSI layer
> separation of concerns? It would make them much more flexible.

Yes. We are also in agreement to make RFC-compliant, optimized in-built
nodes with such flexibility in place.

> It may cause a slight drop of performance because you'd be splitting
> processing in two different nodes. But I think flexibility is more
> important. Otherwise, the in-built nodes can only be used for very
> specific use-cases.
>
> Finally, I would like to improve the rte_node API to allow defining and
> enforcing per-packet metadata that every node expects as input. The
> current in-built nodes rely on mbuf dynamic fields for this, but this
> means you only have 9x32 bits available. And using all of these may
> break some drivers (ixgbe) that rely on dynfields to work. Have you
> considered using mbuf private data for this?

IMO, "node_mbuf_priv_t" would be ideal for most of the use cases as it
fits in the second 64B cache line. With mbuf private data, the fast path
has to access another cache line per packet, which may not be efficient
from a performance PoV. But we can discuss it in more detail. Although,
I thought of adding "sw_if_index" (which is not the same as port_id) to
accommodate IP-in-IP-like software interfaces.
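
To make the cache-line argument concrete, a minimal sketch of the two
options (the struct and field names below are placeholders for
illustration, not the RFC's definitions):

/* Option 1: mbuf dynamic field, stored inside struct rte_mbuf itself
 * (second 64B cache line), but sharing the limited dynfield space with
 * drivers and other libraries.
 * Option 2: mbuf private area, which sits after struct rte_mbuf, so
 * touching it pulls in an extra cache line per packet. */
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

struct node_meta {           /* placeholder metadata layout */
    uint32_t sw_if_index;
};

static int meta_off;         /* offset of the registered dynamic field */

static int
meta_dynfield_register(void)
{
    static const struct rte_mbuf_dynfield desc = {
        .name = "example_node_meta",   /* illustrative name */
        .size = sizeof(struct node_meta),
        .align = sizeof(uint32_t),
    };

    meta_off = rte_mbuf_dynfield_register(&desc);
    return meta_off < 0 ? -1 : 0;
}

static inline struct node_meta *
meta_from_dynfield(struct rte_mbuf *m)
{
    return RTE_MBUF_DYNFIELD(m, meta_off, struct node_meta *);
}

static inline struct node_meta *
meta_from_priv(struct rte_mbuf *m)
{
    /* Requires priv_size >= sizeof(struct node_meta) to be reserved
     * when the mbuf mempool is created. */
    return rte_mbuf_to_priv(m);
}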

> > 2. Flexibility to enable/disable sub-graphs per interface based on
> >    runtime configuration updates. Protocol sub-graphs can be
> >    selectively enabled for a few (or all) interfaces at runtime.
> >
> > 3. More than one sub-graph/feature can be enabled on an interface,
> >    so a packet has to follow a sequential, ordered node path on
> >    worker cores. Packets may need to move from one sub-graph to
> >    another sub-graph per interface.
> >
> > 4. Last but not least, an optimized implementation which does not
> >    stop (or only minimally stops) worker cores for any control plane
> >    runtime updates. Any performance regression should also be
> >    avoided.
> >
> > I am planning to create a draft presentation on feature arc which
> > I can share, when ready, to discuss. If needed, I can also plan to
> > present it in one of the DPDK community meetings. There we can also
> > discuss if there are any alternatives for achieving the above
> > objectives.
>
> Looking forward to this.

Sure. Will share the ppt asap.

> Thanks!
>