* Inflight value shown invalid in Event Dev Queue
From: Hari Haran @ 2023-07-19 12:39 UTC (permalink / raw)
To: users

Hi All,

Once packets are dequeued from port 0, the inflight stat still shows the
same value as the dequeued count. After that, enqueues on port 2 (issued
from another core) fail because the maximum enqueue depth is reached.

Port 0 stats:

In the case below, port 0 dequeued 4096 packets and inflight still shows
the same value.

  Port 0
    rx 0  drop 0  tx 4096  inflight 4096

Full stats:

  Dev=0 Port=1EventDev todo-fix-name: ports 3, qids 1
    rx 32768
    drop 0
    tx 4096
    sched calls: 628945658
    sched cq/qid call: 628964843
    sched no IQ enq: 628926401
    sched no CQ enq: 628942982
    inflight 32768, credits: 0
  Port 0
    rx 0  drop 0  tx 4096  inflight 4096
    Max New: 32768  Avg cycles PP: 0  Credits: 0
    Receive burst distribution:
      0:100% 1-4:0.00% 5-8:0.00% 9-12:0.00%
    rx ring used: 0  free: 4096
    cq ring used: 0  free: 128
  Port 1
    rx 0  drop 0  tx 0  inflight 0
    Max New: 32768  Avg cycles PP: 0  Credits: 0
    Receive burst distribution:
      0:100%
    rx ring used: 0  free: 4096
    cq ring used: 0  free: 128
  Port 2
    rx 32768  drop 0  tx 0  inflight 0
    Max New: 32768  Avg cycles PP: 0  Credits: 0
    Receive burst distribution:
      0:-nan%
    rx ring used: 0  free: 4096
    cq ring used: 0  free: 128
  Queue 0 (Atomic)
    rx 32768  drop 0  tx 4096
    Per Port Stats:
      Port 0: Pkts: 4096 Flows: 1
      Port 1: Pkts: 0 Flows: 0
      Port 2: Pkts: 0 Flows: 0
      Port 3: Pkts: 0 Flows: 0
    iq 0: Used 28672

The issue is resolved only after a system-level restart.

Any insight into this issue would be appreciated. TIA.

Regards,
Hariharan
* RE: Inflight value shown invalid in Event Dev Queue
From: Van Haaren, Harry @ 2023-07-19 12:58 UTC (permalink / raw)
To: Hari Haran, users

> From: Hari Haran <info2hariharan@gmail.com>
> Sent: Wednesday, July 19, 2023 1:39 PM
> To: users@dpdk.org
> Subject: Inflight value shown invalid in Event Dev Queue
>
> Hi All,

Hi Hari Haran,

> Once packets are dequeued from port 0, the inflight stat still shows
> the same value as the dequeued count. After that, enqueues on port 2
> (issued from another core) fail because the maximum enqueue depth is
> reached.

This describes what happens -> it would be helpful to know what you are
expecting to happen. Would you describe what each of ports 0, 1 and 2 is
actually used for, and how events are expected to flow from RX to a port
through a queue, to another port, until TX?

Describing the expectation and then comparing it to the "problem
description" in this email often leads to the root cause & solution.

Keep in mind that the event/sw implementation has capacity limitations,
and the inflight count seems too high in your configuration (inflight =
32768 is an indicator of an issue, as SW_INFLIGHT_EVENTS_TOTAL is 4096
in sw_evdev.h).

> Port 0 stats:
>
> In the case below, port 0 dequeued 4096 packets and inflight still
> shows the same value.

How are the events being re-enqueued?

> Port 0
>   rx 0  drop 0  tx 4096  inflight 4096
>
> Full stats:
> <snip: full stats dump, as above>
>
> The issue is resolved only after a system-level restart.
>
> Any insight into this issue would be appreciated. TIA.
>
> Regards,
> Hariharan

Regards, -Harry van Haaren (PS: our names are surprisingly similar! :)
* Re: Inflight value shown invalid in Event Dev Queue
From: Hari Haran @ 2023-07-19 15:30 UTC (permalink / raw)
To: Van Haaren, Harry; Cc: users

Hi Harry Haaren (Yes :) )

I have given more details below, please check.

Device configuration:
  Event dev queues : 1
  Number of ports  : 3
  Queue 0 depth    : 32k
  Ports 0, 1 and 2 : enqueue depth 4096, dequeue depth 128

Cores:
  Rx core      : 1
  Worker cores : 2

Port 2:
  Used in the Rx core to post packets to the worker cores through the
  event dev queue, so port 2 is used for enqueue only.
  API used: rte_event_enqueue_burst()

Ports 0 and 1:
  Linked with event dev queue 0; these ports are used for dequeue only.
  Port 0 is used in worker core 1, only to receive packets from the Rx
  core through the event dev queue.
  Port 1 is used in worker core 2, only to receive packets from the Rx
  core through the event dev queue.
  API used: rte_event_dequeue_burst()

Expected behaviour:
  Port 2 enqueues packets to the event dev queue in the Rx core, and
  ports 0 and 1 dequeue packets from the event dev queue in the two
  workers. The event dev scheduler of queue 0 schedules the packets
  received on port 2 to ports 0 and 1.

Problem description:
  Port 0 received only 4096 packets through the event dev queue; after
  that, no packets were available for it.
  API used: rte_event_dequeue_burst()

  Port 2 successfully enqueued 32k packets to the event dev queue; after
  that, enqueue failures were observed.
  API used: rte_event_enqueue_burst()

  It looks like the event dev queue stalled at this point. Also, why do
  the port 0 stats show inflight as 4096?

Port 0 stats:
  rx 0  drop 0  tx 4096  inflight 4096

All stats:
  <snip: identical to the full stats dump in my first mail>

Regards,
Hariharan

On Wed, Jul 19, 2023 at 6:30 PM Van Haaren, Harry
<harry.van.haaren@intel.com> wrote:
> <snip: quoted reply of 2023-07-19 12:58, unchanged>
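The stall described above can be modeled without DPDK at all. The
following self-contained C sketch mimics the device-wide inflight-credit
accounting of a software eventdev; the names (`TOTAL_INFLIGHTS`, the
single-counter model) are simplifications for illustration, not the
actual driver code. Once the inflight budget is consumed by NEW enqueues
and the dequeued events are never released, further enqueues fail
exactly as observed on port 2.

```c
/* Toy model of device-wide inflight accounting in a software eventdev.
 * NOT the sw driver's code: names and the single-counter model are
 * simplifications for illustration only.                              */
#include <assert.h>

#define TOTAL_INFLIGHTS 32768   /* device-wide inflight budget
                                   (cf. "Max New: 32768" in the stats) */

static int dev_inflight = 0;    /* events currently owned by the device */

struct port {
    int inflight;               /* dequeued on this port, not yet released */
    int implicit_release;       /* 1 = next dequeue() completes prior batch */
};

/* Producer (port 2): each NEW event consumes one inflight credit. */
static int enqueue_new(int n)
{
    if (dev_inflight + n > TOTAL_INFLIGHTS)
        return 0;               /* no credits left: enqueue fails */
    dev_inflight += n;
    return n;
}

/* Worker (ports 0/1): with implicit release enabled, the previous batch
 * is completed on the next dequeue, returning its credits.            */
static int dequeue(struct port *p, int n)
{
    if (p->implicit_release) {
        dev_inflight -= p->inflight;   /* prior batch released */
        p->inflight = 0;
    }
    p->inflight += n;   /* scheduler hands out up to n events (elided) */
    return n;
}
```

With releases never happening, the model reproduces the thread's
symptom: the producer fills the budget, a worker dequeues a batch, the
device-wide inflight count never drops, and the next enqueue fails.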
* RE: Inflight value shown invalid in Event Dev Queue
From: Van Haaren, Harry @ 2023-07-19 16:17 UTC (permalink / raw)
To: Hari Haran; Cc: users

> From: Hari Haran <info2hariharan@gmail.com>
> Sent: Wednesday, July 19, 2023 4:30 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: users@dpdk.org
> Subject: Re: Inflight value shown invalid in Event Dev Queue
>
> Hi Harry Haaren (Yes :) )
>
> I have given more details below, please check.

Please reply "in-line": it makes the conversation easier to read for
future readers, and gives reference for your replies.

> <snip: device configuration, core layout and expected behaviour, as in
> the previous mail>
>
> Problem description:
>   Port 0 received only 4096 packets through the event dev queue; after
>   that, no packets were available for it.
>   API used: rte_event_dequeue_burst()
>
>   Port 2 successfully enqueued 32k packets to the event dev queue;
>   after that, enqueue failures were observed.
>   API used: rte_event_enqueue_burst()
>
>   It looks like the event dev queue stalled at this point. Also, why
>   do the port 0 stats show inflight as 4096?

This seems to be the problem - are you returning the events to the
eventdev, or calling the rte_event_dequeue_burst() API again? (With the
default "implicit release" behaviour, the next dequeue() call
automatically "completes" the previously dequeued events, making the
inflight count go down and allowing the eventdev to make forward
progress.)

Please ensure that new events are enqueued with the "NEW" type, and that
the worker cores forward events with the "FWD" type. This ensures that
the RX/producer core is back-pressured first, and that the worker cores
(which enqueue FWD-type events) can make progress while there is still
space in the device.

Typically, setting a "new_event_threshold" on the producer port
(https://doc.dpdk.org/api/structrte__event__port__conf.html#a70bebdfb5211f97b81b46ff08594ddda)
of 50% of the total capacity is a good starting point. The ideal NEW
percentage depends on the workload itself, and on how often one NEW
event turns into N NEW events.

> <snip: stats>
>
> Regards,
> Hariharan

Regards, -Harry

<snip below older parts of conversation>
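Harry's advice above can be sketched as a configuration fragment. This
is illustrative only and requires DPDK to build; the field and flag
names (new_event_threshold, RTE_EVENT_OP_NEW, RTE_EVENT_OP_FORWARD) are
from rte_eventdev.h, but exact values such as the 2048 threshold and the
port/device IDs are assumptions -- check them against your release.

```
/* Illustrative fragment only -- requires DPDK headers to build.
 * Producer port: cap NEW events at ~50% of total capacity so that
 * the RX core is back-pressured before the device fills up.        */
struct rte_event_port_conf prod_conf = {
    .new_event_threshold = 2048,   /* assumed: ~50% of the sw driver's
                                      4096 total inflight events      */
    .dequeue_depth       = 128,
    .enqueue_depth       = 64,
};
/* rte_event_port_setup(dev_id, prod_port_id, &prod_conf); */

/* Producer core: inject packets as NEW events.                     */
ev.op = RTE_EVENT_OP_NEW;
rte_event_enqueue_burst(dev_id, prod_port_id, &ev, 1);

/* Worker core: events that continue through the pipeline go out
 * with the FORWARD op, which does not consume NEW-event credits.   */
ev.op = RTE_EVENT_OP_FORWARD;
```

The design point here is the two-tier back-pressure Harry describes:
NEW enqueues are throttled by the threshold, while FWD enqueues from
workers can still proceed, so in-progress work always drains.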
* Re: Inflight value shown invalid in Event Dev Queue
From: Hari Haran @ 2023-08-29 14:50 UTC (permalink / raw)
To: Van Haaren, Harry; Cc: users

Hi Harry,

Thanks for your valuable time and response to the query. This
long-standing issue has now been fixed.

The issue was resolved after proper initialization of the
'disable_implicit_release' variable during event dev initialization. In
our case, we send packets from the Rx core to the worker cores using
the event dev queue and expect implicit release of the events after the
worker core dequeues the packets from the queue. After initializing
'disable_implicit_release', the event dev queue now releases the
inflight packets once the dequeue happens in the worker.

With a wide range of testing, all cases are now working fine.

Regards,
Hariharan

On Wed, Jul 19, 2023 at 9:49 PM Van Haaren, Harry
<harry.van.haaren@intel.com> wrote:
> <snip: quoted reply of 2023-07-19 16:17, unchanged>
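The fix Hari describes maps onto the worker-port configuration. A
hedged sketch follows; it is illustrative only and requires DPDK to
build. Note the knob moved between releases: older DPDK versions expose
a disable_implicit_release field in rte_event_port_conf, while newer
ones (21.11+) use the RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag in the
event_port_cfg field -- verify against your version.

```
/* Illustrative fragment only -- requires DPDK headers to build.     */
struct rte_event_port_conf worker_conf;
rte_event_port_default_conf_get(dev_id, worker_port_id, &worker_conf);

/* Keep implicit release ENABLED (the behaviour Hari wanted): the
 * next rte_event_dequeue_burst() then completes the previous batch
 * and the device's inflight count drops.                            */
worker_conf.disable_implicit_release = 0;   /* pre-21.11 field name  */
/* On 21.11+ the equivalent is: do NOT set
 * RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL in worker_conf.event_port_cfg */

rte_event_port_setup(dev_id, worker_port_id, &worker_conf);
```

The bug pattern in the thread -- a stack-allocated port conf with the
release field left uninitialized -- is exactly why starting from
rte_event_port_default_conf_get() (or zero-initializing the struct) is
the safer idiom.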
end of thread, other threads:[~2023-08-29 14:51 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-19 12:39 Inflight value shown invalid in Event Dev Queue Hari Haran
2023-07-19 12:58 ` Van Haaren, Harry
2023-07-19 15:30   ` Hari Haran
2023-07-19 16:17     ` Van Haaren, Harry
2023-08-29 14:50       ` Hari Haran