DPDK usage discussions
From: Hari Haran <info2hariharan@gmail.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: Inflight value shown invalid in Event Dev Queue
Date: Tue, 29 Aug 2023 20:20:51 +0530	[thread overview]
Message-ID: <CAPbxCtqxgxBUs79c-juogdGaoQxqhocU+GgsQTf4HiaQOH=E-w@mail.gmail.com> (raw)
In-Reply-To: <PH8PR11MB680309128FBFF1D676A1571FD739A@PH8PR11MB6803.namprd11.prod.outlook.com>


Hi Harry,

Thanks for your valuable time and your response to the query. The
long-standing issue has now been fixed.



The issue was resolved after properly initializing the 'disable_implicit_release'
field during event device initialization.



In our case, we send packets from the Rx core to the worker cores through the
event dev queue and expect the events to be released implicitly after the
worker cores dequeue them. With 'disable_implicit_release'
properly initialized, the event dev queue now releases the inflight events
once the dequeue has happened in the worker.
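
For reference, a minimal sketch of the kind of port setup involved. The helper
name and dev/port ids are illustrative, and it assumes a DPDK release where
struct rte_event_port_conf still exposes the disable_implicit_release field
(newer releases express the same thing through an event_port_cfg flag):

#include <string.h>
#include <rte_eventdev.h>

static int
setup_worker_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf pconf;

	/* Zero the struct and then load the device defaults so no field
	 * (including disable_implicit_release) is left as stack garbage -
	 * an uninitialized field was the root cause described above. */
	memset(&pconf, 0, sizeof(pconf));
	if (rte_event_port_default_conf_get(dev_id, port_id, &pconf) < 0)
		return -1;

	/* Keep implicit release enabled: events dequeued on this port are
	 * released automatically on the next rte_event_dequeue_burst(). */
	pconf.disable_implicit_release = 0;

	return rte_event_port_setup(dev_id, port_id, &pconf);
}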



After a wide range of testing, all cases are now working fine.


Regards,

Hariharan

On Wed, Jul 19, 2023 at 9:49 PM Van Haaren, Harry <
harry.van.haaren@intel.com> wrote:

> > From: Hari Haran <info2hariharan@gmail.com>
> > Sent: Wednesday, July 19, 2023 4:30 PM
> > To: Van Haaren, Harry <harry.van.haaren@intel.com>
> > Cc: users@dpdk.org
> > Subject: Re: Inflight value shown invalid in Event Dev Queue
> >
> > Hi Harry Haaren (Yes :) )
> >
> > I have given more details below; please check them.
>
> Please reply "in-line"; it makes the conversation easier to read for
> future readers, and gives reference to your replies.
>
> > Device Configuration:
> > Event Dev Queue : 1
> > Number of ports : 3
> >
> > Queue 0 depth - 32k
> > Port 0, 1 and 2: Enqueue depth 4096, Dequeue depth 128
> >
> > Cores:
> > Rx core - 1
> > Workers cores - 2
> >
> > Port 2:
> > Used in Rx core - used to post packets from the Rx core to the worker cores
> > using the event dev queue.
> > So port 2 is used to post packets only.
> > API used: rte_event_enqueue_burst()
> >
> > Ports 0 and 1 are linked with Event Dev Q 0 to dequeue the packets. These
> > ports are used to dequeue packets only.
> > Port 0 used in Worker core 1 - Only to receive the packets from Rx core
> using event dev queue
> > Port 1 used in worker core 2 - Only to receive the packets from Rx core
> using event dev queue
> > API used: rte_event_dequeue_burst()
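
A rough sketch of the configuration described above (one atomic queue, three
ports, ports 0 and 1 linked for the workers). The helper name and the exact
depths are illustrative, taken only from the numbers quoted in this thread:

#include <rte_eventdev.h>

static int
configure_evdev(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg = {0};
	struct rte_event_queue_conf qconf = {0};
	uint8_t q = 0;

	rte_event_dev_info_get(dev_id, &info);

	cfg.nb_event_queues = 1;
	cfg.nb_event_ports = 3;
	cfg.nb_events_limit = RTE_MIN(32 * 1024, info.max_num_events);
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_enqueue_depth = 4096;   /* enqueue depth */
	cfg.nb_event_port_dequeue_depth = 128;    /* dequeue depth */
	cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		return -1;

	rte_event_queue_default_conf_get(dev_id, q, &qconf);
	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;   /* "Queue 0 (Atomic)" */
	if (rte_event_queue_setup(dev_id, q, &qconf) < 0)
		return -1;

	/* Ports 0 and 1 (workers) dequeue from queue 0; port 2 (Rx core)
	 * only enqueues, so it is left unlinked. */
	if (rte_event_port_link(dev_id, 0, &q, NULL, 1) != 1 ||
	    rte_event_port_link(dev_id, 1, &q, NULL, 1) != 1)
		return -1;

	return 0;
}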
> >
> > Expected behaviour:
> >
> > Port 2 enqueue packets to event dev Q in Rx core
> > Port 0 and 1 dequeue packets from event dev Q in two workers
> >
> > The event dev scheduler of queue 0 will schedule packets received on port 2
> > to ports 0 and 1.
> >
> >
> > Problem Description:
> >
> > Port 0 - only received 4096 packets through the event dev Q; after that, no
> > packets were available for it.
> > API used: rte_event_dequeue_burst()
> >
> > Port 2 - successfully enqueued 32k packets through the event dev Q; after
> > that, enqueue failures were observed.
> > API used: rte_event_enqueue_burst()
> > It looks like the event dev queue stalled at this point.
> >
> > Also why port 0 stats show inflight as 4096?
>
> This seems to be the problem - are you returning the events to Eventdev,
> or calling the rte_event_dequeue_burst() API again? (The "implicit
> release" default behaviour will automatically "complete" the events on the
> next dequeue() call, making the "inflights" go down and allowing the
> Eventdev to make forward progress.)
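
As a concrete illustration of that implicit-release behaviour (a minimal
sketch; the burst size and the mbuf handling are placeholders, not the
application's actual code):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev[32];

	for (;;) {
		uint16_t i, n;

		n = rte_event_dequeue_burst(dev_id, port_id, ev,
					    RTE_DIM(ev), 0);
		for (i = 0; i < n; i++) {
			/* ... application processing of ev[i].mbuf ... */
			rte_pktmbuf_free(ev[i].mbuf); /* worker is a sink here */
		}
		/* No explicit RTE_EVENT_OP_RELEASE is needed: with implicit
		 * release enabled, the next dequeue_burst() call completes
		 * these events and the port's "inflight" count drops. */
	}
}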
>
> Please ensure that new events are enqueued with "NEW" type,
> and that the worker cores forward events with "FWD" type.
>
> This ensures that the RX/producer core is back-pressured first, and that
> worker cores (who enqueue FWD type events) can make progress as there is
> still space in the device.
> Typically, setting a "new_event_threshold" on the producer port (
> https://doc.dpdk.org/api/structrte__event__port__conf.html#a70bebdfb5211f97b81b46ff08594ddda)
> to 50% of the total capacity is a good starting point. The ideal NEW
> percentage depends on the workload itself, and on how often one NEW event
> turns into N new events.
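
A sketch of that advice in code. Function names and values are illustrative;
the 50% threshold and the op types follow the description above, and in this
thread's topology the workers are pure consumers, so the FORWARD helper only
applies if they pass events on to a further stage:

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Producer port: cap how many NEW events it may inject. */
static int
setup_producer_port(uint8_t dev_id, uint8_t port_id, int32_t dev_capacity)
{
	struct rte_event_port_conf pconf;

	if (rte_event_port_default_conf_get(dev_id, port_id, &pconf) < 0)
		return -1;
	pconf.new_event_threshold = dev_capacity / 2; /* ~50% starting point */
	return rte_event_port_setup(dev_id, port_id, &pconf);
}

/* Rx/producer core: inject new work as NEW-type events. */
static inline uint16_t
enqueue_new(uint8_t dev_id, uint8_t port_id, uint8_t qid, struct rte_mbuf *m)
{
	struct rte_event ev = {
		.op = RTE_EVENT_OP_NEW,
		.queue_id = qid,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.event_type = RTE_EVENT_TYPE_CPU,
		.mbuf = m,
	};
	return rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
}

/* Worker core: re-inject an already-dequeued event as FORWARD so it is
 * not counted against the NEW-event threshold. */
static inline uint16_t
forward_event(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
	      uint8_t next_qid)
{
	ev->op = RTE_EVENT_OP_FORWARD;
	ev->queue_id = next_qid;
	return rte_event_enqueue_burst(dev_id, port_id, ev, 1);
}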
>
> > Port 0 Stats:
> >   rx   0  drop 0  tx   4096   inflight 4096
> >
> > All Stats:
> > Dev=0 Port=1EventDev todo-fix-name: ports 3, qids 1
> > rx   32768
> > drop 0
> > tx   4096
> > sched calls: 628945658
> > sched cq/qid call: 628964843
> > sched no IQ enq: 628926401
> > sched no CQ enq: 628942982
> > inflight 32768, credits: 0
> >
> >
> > Port 0
> >   rx   0  drop 0  tx   4096   inflight 4096
> >   Max New: 32768  Avg cycles PP: 0    Credits: 0
> >   Receive burst distribution:
> >       0:100% 1-4:0.00% 5-8:0.00% 9-12:0.00%
> >   rx ring used:    0 free: 4096
> >   cq ring used:    0 free:  128
> > Port 1
> >   rx   0  drop 0  tx   0  inflight 0
> >   Max New: 32768  Avg cycles PP: 0    Credits: 0
> >   Receive burst distribution:
> >       0:100%
> >   rx ring used:    0 free: 4096
> >   cq ring used:    0 free:  128
> > Port 2
> >   rx   32768  drop 0  tx   0  inflight 0
> >   Max New: 32768  Avg cycles PP: 0    Credits: 0
> >   Receive burst distribution:
> >       0:-nan%
> >   rx ring used:    0 free: 4096
> >   cq ring used:    0 free:  128
> >
> > Queue 0 (Atomic)
> >   rx   32768  drop 0  tx   4096
> >   Per Port Stats:
> >     Port 0: Pkts: 4096    Flows: 1
> >     Port 1: Pkts: 0   Flows: 0
> >     Port 2: Pkts: 0   Flows: 0
> >     Port 3: Pkts: 0   Flows: 0
> >   iq 0: Used 28672
> >
> > Regards,
> > Hariharan
>
> Regards, -Harry
>
> <snip below older parts of conversation>
>


Thread overview: 5+ messages
2023-07-19 12:39 Hari Haran
2023-07-19 12:58 ` Van Haaren, Harry
2023-07-19 15:30   ` Hari Haran
2023-07-19 16:17     ` Van Haaren, Harry
2023-08-29 14:50       ` Hari Haran [this message]
