Date: Wed, 23 Jul 2025 16:42:19 +0400 (+04)
From: Ivan Malov
To: Tom Barbette
cc: Stephen Hemminger, Scott Wasson, "users@dpdk.org"
Subject: Re: rte_eth_dev_rss_reta_update() locking considerations?
Message-ID: <0ab2a459-2400-d06a-cbde-f1253988d2ab@arknetworks.am>
References: <20250715144015.7b2504d9@hermes.local>

Hi Tom,

On Wed, 23 Jul 2025, Tom Barbette wrote:

> Hi all,
>
> As Ivan mentioned, this is exactly what we did in RSS++.
>
> As for the concern about reprogramming RSS "live", it depends on the
> NIC. I remember the Intel card we used could use the "global" API just
> fine. For the Mellanox cards we had to use the rte_flow RSS action, as
> reprogramming the global RETA table would trigger a (partial?) device
> restart and the loss of many packets. We had to play with priorities
> and prefixes, but

Valid point, indeed. So some drivers, just like with MTU updates in the
started state, may need an internal port restart. Thanks for clarifying
this.

> rte_flow and mlx5 support have evolved since then, so it might be a
> bit simpler now, perhaps just using priorities and groups.
>
> The biggest challenge was the state, as written in the paper. We ended
> up using rte_flow rules anyway, so we could use an epoch "mark" action
> that tags packets with the version of the distribution table and
> allows efficient hand-off of flow state from one core to another.
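A minimal sketch of such a rule (one catch-all flow that marks packets
with the current epoch and spreads them over an explicit queue list)
might look as follows. The pattern, RSS hash types, queue list and
epoch value here are illustrative assumptions, not RSS++'s actual
rules:

#include <rte_flow.h>

static struct rte_flow *
install_epoch_rss_rule(uint16_t port_id, uint32_t epoch,
                       const uint16_t *queues, uint32_t nb_queues,
                       struct rte_flow_error *error)
{
        /* Ingress-only rule. */
        const struct rte_flow_attr attr = { .ingress = 1 };

        /* Match all Ethernet traffic. */
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* Tag packets with the distribution-table version ("epoch"). */
        const struct rte_flow_action_mark mark = { .id = epoch };

        /* Spread matching packets over the given Rx queues. */
        const struct rte_flow_action_rss rss = {
                .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
                .queue_num = nb_queues,
                .queue = queues,
        };

        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}

On the Rx side, the mark is then typically readable from the mbuf (when
the RTE_MBUF_F_RX_FDIR_ID flag is set), which is what enables the
core-to-core state hand-off described above.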
> The code of RSS++ is still coupled a bit to FastClick, but it was
> mostly separated already here:
> https://github.com/tbarbette/fastclick/tree/main/vendor/nicscheduler
>
> We also had a version for the Linux kernel with XDP for counting.
>
> We can chat about that if you want.
>
> NB: my address has changed, I'm not at KTH anymore.

Apologies for mixing it up. Found the new one at the top of
https://github.com/rsspp . Thank you.

> Cheers,
> Tom
>
> From: Stephen Hemminger
> Date: Tuesday, 15 July 2025 at 23:40
> To: Scott Wasson
> Cc: users@dpdk.org
> Subject: Re: rte_eth_dev_rss_reta_update() locking considerations?
>
> On Tue, 15 Jul 2025 16:15:22 +0000
> Scott Wasson wrote:
>
> > Hi,
> >
> > We're using multiqueue, and RSS doesn't always balance the load very
> > well. I had a clever idea: periodically measure the load distribution
> > (CPU load on the I/O cores) in the background pthread, and use
> > rte_eth_dev_rss_reta_update() to adjust the redirection table
> > dynamically if the imbalance exceeds a given threshold. In practice
> > it seems to work nicely. But I'm concerned about:
> >
> > https://doc.dpdk.org/api/rte__ethdev_8h.html#a3c1540852c9cf1e576a883902c2e310d
> >
> > Which states:
> >
> > By default, all the functions of the Ethernet Device API exported by
> > a PMD are lock-free functions which assume to not be invoked in
> > parallel on different logical cores to work on the same target
> > object. For instance, the receive function of a PMD cannot be invoked
> > in parallel on two logical cores to poll the same Rx queue [of the
> > same port]. Of course, this function can be invoked in parallel by
> > different logical cores on different Rx queues. It is the
> > responsibility of the upper level application to enforce this rule.
> >
> > In this context, what is the "target object"? The queue_id of the
> > port? Or the port itself? Would I need to add port-level spinlocks
> > around every invocation of rte_eth_dev_*()? That's a hard no; it
> > would destroy performance.
> >
> > Alternatively, if I were to periodically call
> > rte_eth_dev_rss_reta_update() from the I/O cores instead of the
> > background core, as the above paragraph suggests, that doesn't seem
> > correct either. The function takes a reta_conf[] array that affects
> > all RETA entries for that port and maps them to a queue_id. Is it
> > safe to remap RETA entries for a given port on one I/O core while
> > another I/O core is potentially reading from its Rx queue for that
> > same port? That problem seems not much different from remapping on
> > the background core as I do now.
> >
> > I'm starting to suspect this function was intended to be called once
> > at startup before rte_eth_dev_start(), and/or that the port must be
> > stopped before calling it. If that's the case, then I'll call this
> > idea too clever by half and give it up now.
> >
> > Thanks in advance for your help!
> >
> > -Scott
>
> There is no locking in the driver control path. The application is
> expected to manage access to the control path (RSS being one example)
> so that only one thread modifies the PMD at a time.
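In practice, Stephen's rule amounts to issuing the RETA update from a
single control thread (e.g. Scott's background pthread) and never from
the I/O cores. A minimal sketch, with a hypothetical helper name and a
caller-provided queue assignment per RETA entry:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Remap the whole RETA of a port. Call from one control thread only,
 * with no other control-path calls on this port in flight.
 */
static int
remap_reta(uint16_t port_id, const uint16_t *queue_per_entry)
{
        struct rte_eth_dev_info info;
        /* Enough 64-entry groups for a RETA of up to 512 entries. */
        struct rte_eth_rss_reta_entry64
                reta_conf[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
                return ret;
        if (info.reta_size > RTE_ETH_RSS_RETA_SIZE_512)
                return -EINVAL;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < info.reta_size; i++) {
                uint16_t grp = i / RTE_ETH_RETA_GROUP_SIZE;
                uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

                /* Select this entry for update and set its Rx queue. */
                reta_conf[grp].mask |= UINT64_C(1) << pos;
                reta_conf[grp].reta[pos] = queue_per_entry[i];
        }

        return rte_eth_dev_rss_reta_update(port_id, reta_conf,
                                           info.reta_size);
}

Whether the update takes effect without disturbing traffic still
depends on the PMD, as Tom notes above for mlx5.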