From: Bing Zhao <bingz@nvidia.com>
To: "Ivan Malov" <ivan.malov@arknetworks.am>,
배성종 <sjbae1999@gmail.com>, "Erez Shitrit" <erezsh@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
Slava Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Subject: RE: [DPDK 24.11.3-rc1] rte_flow_async_create() stucks in while loop (infinite loop)
Date: Tue, 12 Aug 2025 08:30:49 +0000 [thread overview]
Message-ID: <IA4PR12MB97631E5446B1B77DF9C71877D02BA@IA4PR12MB9763.namprd12.prod.outlook.com> (raw)
In-Reply-To: <9c57e90b-4216-cc15-6ff3-b8ed8cd322d5@arknetworks.am>
@Ivan Malov, which version of DPDK are you using? Last year's RC?
@Erez Shitrit, could you help confirm whether the GCC loop-expansion bug seen with some Arm toolchains is also present in this branch?
I remember there was a GCC bug where the code always compared with 1 and jumped into an infinite loop.
Thanks
> -----Original Message-----
> From: Ivan Malov <ivan.malov@arknetworks.am>
> Sent: Tuesday, August 12, 2025 12:09 AM
> To: 배성종 <sjbae1999@gmail.com>
> Cc: users@dpdk.org; Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Bing Zhao <bingz@nvidia.com>; Ori Kam
> <orika@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Subject: Re: [DPDK 24.11.3-rc1] rte_flow_async_create() stucks in while
> loop (infinite loop)
>
> Hi,
>
> On Mon, 28 Jul 2025, 배성종 wrote:
>
> > Hello commit authors (and maintainers),
> >
> > I'm currently working with rte_flow_async_create() using the postpone
> > flag, along with rte_flow_push/pull() for batching, in a scenario
> > involving thousands of flows on a BlueField-2 system.
> >
> > My goal is to implement hardware steering such that ingress traffic
> > bypasses the ARM core of the BF2, and egress traffic does the same.
> >
> > According to the DPDK documentation, rte_flow_push/pull() seems to be
> > intended for use as a batch operation, wrapping a large for loop that
> > issues multiple flow operations, and then committing them to hardware in
> > one go.
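> >
> > Concretely, the batched pattern I have in mind looks roughly like this (a
> > simplified sketch; table, pattern[i], actions[i], flows[], nb_rules,
> > port_id and queue_id are placeholders set up elsewhere, and error
> > handling is omitted):
> >
> >     struct rte_flow_op_attr op_attr = { .postpone = 1 }; /* defer the doorbell */
> >     struct rte_flow_op_result res[64];
> >     struct rte_flow_error err;
> >     uint32_t pending = 0;
> >     int n;
> >
> >     for (uint32_t i = 0; i < nb_rules; i++) {
> >             flows[i] = rte_flow_async_create(port_id, queue_id, &op_attr,
> >                                              table, pattern[i], 0,
> >                                              actions[i], 0, NULL, &err);
> >             pending++;
> >     }
> >     /* Commit the whole batch to hardware in one go. */
> >     rte_flow_push(port_id, queue_id, &err);
> >     /* Drain completions until every enqueued operation has finished. */
> >     while (pending > 0) {
> >             n = rte_flow_pull(port_id, queue_id, res, RTE_DIM(res), &err);
> >             if (n > 0)
> >                     pending -= n;
> >     }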
> >
> > However, I’ve observed that when multiple cores simultaneously insert
> > flow rules, using rte_flow_push/pull() in such a batched way can result
> > in the rule insertion operations not being properly transmitted to the
> > hardware. Specifically, the internal function mlx5dr_send_all_dep_wqe()
> > ends up getting stuck in its while loop.
> >
> > Interestingly, if I call rte_flow_push/pull() after each individual
> > rte_flow_async_create() operation, even though that usage seems contrary
> > to the intended batching model, the infinite loop issue is significantly
> > mitigated. The frequency of getting stuck in mlx5dr_send_all_dep_wqe()
> > drops drastically, though it still occurs occasionally.
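> >
> > In other words, the workaround moves the push/pull pair inside the loop,
> > roughly like this (same placeholder names as in the sketch above):
> >
> >     for (uint32_t i = 0; i < nb_rules; i++) {
> >             flows[i] = rte_flow_async_create(port_id, queue_id, &op_attr,
> >                                              table, pattern[i], 0,
> >                                              actions[i], 0, NULL, &err);
> >             /* Ring the doorbell and wait for this rule's completion. */
> >             rte_flow_push(port_id, queue_id, &err);
> >             while (rte_flow_pull(port_id, queue_id, res, RTE_DIM(res), &err) == 0)
> >                     ;
> >     }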
> >
> > In summary, calling rte_flow_push/pull() after each
> > rte_flow_async_create() seems to avoid the infinite loop, but I’m unsure
> > if this is an expected usage pattern. I would like to ask:
> >
> > * Is this behavior intentional?
> >
> > * Am I misunderstanding the design or usage expectations for
> >   rte_flow_push/pull() in multi-core scenarios?
> >
>
> Perhaps my question is a bit out of place and wrong, but, given that
> there are no code snippets to look at: are you using separate flow
> queues for submitting the operations, one flow queue per lcore?
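>
> For example (a rough sketch; queue sizes and the lcore-to-queue mapping
> are only illustrative):
>
>     /* Set up once at init time: one flow queue per worker lcore. */
>     struct rte_flow_port_attr port_attr = { 0 };
>     struct rte_flow_queue_attr queue_attr = { .size = 128 };
>     const struct rte_flow_queue_attr *queue_attr_list[RTE_MAX_LCORE];
>     struct rte_flow_error err;
>     uint16_t nb_queues = rte_lcore_count() - 1; /* worker lcores only */
>
>     for (uint16_t q = 0; q < nb_queues; q++)
>             queue_attr_list[q] = &queue_attr;
>     rte_flow_configure(port_id, &port_attr, nb_queues, queue_attr_list, &err);
>
>     /* On each worker lcore: submit only to that lcore's own queue. */
>     uint32_t queue_id = my_worker_index; /* unique per lcore, 0..nb_queues-1 */
>     rte_flow_async_create(port_id, queue_id, &op_attr, table,
>                           pattern, 0, actions, 0, NULL, &err);
>     rte_flow_push(port_id, queue_id, &err);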
>
> Thank you.
>
> > Thank you for your time and support.
> >
> > Sincerely,
> > Seongjong Bae, M.S. Student, T-Networking Lab.
> > Email: sjbae1999@gmail.com
> > Mobile: (+82)01089640524
> > Web: https://tnet.snu.ac.kr/