DPDK usage discussions
From: 배성종 <sjbae1999@gmail.com>
To: Bing Zhao <bingz@nvidia.com>
Cc: Ivan Malov <ivan.malov@arknetworks.am>,
	Erez Shitrit <erezsh@nvidia.com>,
	 "users@dpdk.org" <users@dpdk.org>,
	Dariusz Sosnowski <dsosnowski@nvidia.com>,
	 Slava Ovsiienko <viacheslavo@nvidia.com>,
	Ori Kam <orika@nvidia.com>,  Suanming Mou <suanmingm@nvidia.com>,
	Matan Azrad <matan@nvidia.com>
Subject: Re: [DPDK 24.11.3-rc1] rte_flow_async_create() stucks in while loop (infinite loop)
Date: Wed, 13 Aug 2025 12:45:42 +0900
Message-ID: <CAMFKeQ2H8GxUAatHY0PEbhCQSFtG7vt5U730P22GHX9RqGw7ag@mail.gmail.com>
In-Reply-To: <IA4PR12MB97631E5446B1B77DF9C71877D02BA@IA4PR12MB9763.namprd12.prod.outlook.com>

Hello,

@Ivan Malov, I use one flow queue per lcore.
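
For reference, this is roughly how the queues are set up on my side; it is
only a minimal sketch (the function name, the queue size of 1024 and the
error handling are placeholders, and template/table creation is omitted):

#include <rte_flow.h>
#include <rte_lcore.h>

/* Configure one asynchronous flow queue per worker lcore. */
static int
configure_flow_queues(uint16_t port_id, uint16_t nb_workers)
{
	const struct rte_flow_port_attr port_attr = { 0 };
	const struct rte_flow_queue_attr queue_attr = { .size = 1024 };
	const struct rte_flow_queue_attr *attr_list[RTE_MAX_LCORE];
	struct rte_flow_error error;
	uint16_t i;

	/* All queues use the same attributes; each worker lcore later
	 * submits only to its own queue_id, so queues are never shared. */
	for (i = 0; i < nb_workers; i++)
		attr_list[i] = &queue_attr;

	return rte_flow_configure(port_id, &port_attr, nb_workers,
				  attr_list, &error);
}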

Sincerely,
Seongjong Bae
M.S. Student, T-Networking Lab.

Email: sjbae1999@gmail.com
Mobile: (+82)01089640524
Web: https://tnet.snu.ac.kr/


On Tue, Aug 12, 2025 at 5:30 PM, Bing Zhao <bingz@nvidia.com> wrote:

> @Ivan Malov, which version of DPDK are you using? Last year's RC?
>
> @Erez Shitrit, could you help confirm whether the GCC loop-expansion bug
> seen with some Arm compilers is also present in this branch?
> I remember a GCC bug where the code always compared against 1 and jumped
> into an infinite loop.
>
> Thanks
>
> > -----Original Message-----
> > From: Ivan Malov <ivan.malov@arknetworks.am>
> > Sent: Tuesday, August 12, 2025 12:09 AM
> > To: 배성종 <sjbae1999@gmail.com>
> > Cc: users@dpdk.org; Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava
> > Ovsiienko <viacheslavo@nvidia.com>; Bing Zhao <bingz@nvidia.com>; Ori
> Kam
> > <orika@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> > <matan@nvidia.com>
> > Subject: Re: [DPDK 24.11.3-rc1] rte_flow_async_create() stucks in while
> > loop (infinite loop)
> >
> > Hi,
> >
> > On Mon, 28 Jul 2025, 배성종 wrote:
> >
> > > Hello commit authors (and maintainers),
> > >
> > > I'm currently working with rte_flow_async_create() using the postpone
> > > flag, along with rte_flow_push/pull() for batching, in a scenario
> > involving thousands of flows on a BlueField-2 system.
> > >
> > > My goal is to implement hardware steering such that ingress traffic
> > bypasses the ARM core of the BF2, and egress traffic does the same.
> > >
> > > According to the DPDK documentation, rte_flow_push/pull() seems to be
> > > intended for use as a batch operation, wrapping a large for loop that
> > issues multiple flow operations, and then committing them to hardware in
> > one go.
> > >
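> > > To be concrete, the batched pattern I mean is roughly the sketch below (the
> > > table, patterns and actions come from my application, and the names and the
> > > result-array size are only placeholders here):
> > >
> > > static void
> > > insert_rules_batched(uint16_t port_id, uint32_t queue,
> > >                      struct rte_flow_template_table *table,
> > >                      const struct rte_flow_item **patterns,
> > >                      const struct rte_flow_action **actions,
> > >                      uint32_t nb_rules)
> > > {
> > >         const struct rte_flow_op_attr op_attr = { .postpone = 1 };
> > >         struct rte_flow_op_result res[64];
> > >         struct rte_flow_error error;
> > >         uint32_t i, pending = 0;
> > >
> > >         for (i = 0; i < nb_rules; i++) {
> > >                 /* Enqueue only; .postpone defers the doorbell. */
> > >                 if (rte_flow_async_create(port_id, queue, &op_attr, table,
> > >                                           patterns[i], 0, actions[i], 0,
> > >                                           NULL, &error) != NULL)
> > >                         pending++;
> > >         }
> > >
> > >         /* Commit the whole batch to hardware in one go... */
> > >         rte_flow_push(port_id, queue, &error);
> > >
> > >         /* ...then drain the completions. */
> > >         while (pending > 0) {
> > >                 int n = rte_flow_pull(port_id, queue, res,
> > >                                       RTE_DIM(res), &error);
> > >                 if (n < 0)
> > >                         break;
> > >                 pending -= (uint32_t)n;
> > >         }
> > > }
> > >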
> > > However, I’ve observed that when multiple cores simultaneously insert
> > > flow rules, using rte_flow_push/pull() in such a batched way can result
> > in the rule insertion operations not being properly transmitted to the
> > hardware. Specifically, the internal function mlx5dr_send_all_dep_wqe()
> > ends up getting stuck in its while loop.
> > >
> > > Interestingly, if I call rte_flow_push/pull() after each individual
> > > rte_flow_async_create() operation, even though that usage seems
> contrary
> > to the intended batching model, the infinite loop issue is significantly
> > mitigated. The frequency of getting stuck in mlx5dr_send_all_dep_wqe()
> > drops drastically—though it still occurs occasionally.
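> > >
> > > In other words, the per-rule variant that mitigates the hang is roughly
> > > (same placeholder names as in the sketch above):
> > >
> > >         rte_flow_async_create(port_id, queue, &op_attr, table,
> > >                               patterns[i], 0, actions[i], 0, NULL, &error);
> > >         rte_flow_push(port_id, queue, &error);
> > >         /* Busy-wait until this single completion is pulled back. */
> > >         while (rte_flow_pull(port_id, queue, res, RTE_DIM(res), &error) == 0)
> > >                 ;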
> > >
> > > In summary, calling rte_flow_push/pull() after each
> > rte_flow_async_create() seems to avoid the infinite loop, but I’m unsure
> > if this is an expected usage pattern. I would like to ask:
> > >
> > >  * Is this behavior intentional?
> > >
> > >  * Am I misunderstanding the design or usage expectations for
> > >    rte_flow_push/pull() in multi-core scenarios?
> > >
> >
> > Perhaps my question is a bit out of place and wrong, but, given the fact
> > there are no code snippets to take a look at, are you using separate flow
> > queues for submitting the operations, one flow queue per lcore?
> >
> > Thank you.
> >
> > > Thank you for your time and support.
> > >
> > > Sincerely,
> > > Seongjong Bae
> > > M.S. Student, T-Networking Lab.
> > > Email: sjbae1999@gmail.com
> > > Mobile: (+82)01089640524
> > > Web: https://tnet.snu.ac.kr/
> > >
> > >
>

Thread overview: 4+ messages
2025-07-28 10:49 배성종
2025-08-11 16:08 ` Ivan Malov
2025-08-12  8:30   ` Bing Zhao
2025-08-13  3:45     ` 배성종 [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAMFKeQ2H8GxUAatHY0PEbhCQSFtG7vt5U730P22GHX9RqGw7ag@mail.gmail.com \
    --to=sjbae1999@gmail.com \
    --cc=bingz@nvidia.com \
    --cc=dsosnowski@nvidia.com \
    --cc=erezsh@nvidia.com \
    --cc=ivan.malov@arknetworks.am \
    --cc=matan@nvidia.com \
    --cc=orika@nvidia.com \
    --cc=suanmingm@nvidia.com \
    --cc=users@dpdk.org \
    --cc=viacheslavo@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and
a blank line before the message body.
This is a public inbox; see the mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).