Hi all,

The DPDK QoS scheduler has a 4-stage pipeline for enqueuing packets, which is used to hide the latency of prefetching the data structures. Why is there no such pipeline for dequeuing packets?

How does the dequeue function maintain the state of a packet? In other words, if I want to trace a dequeued packet back to find out which traffic class and which queue it was dequeued from, is there any way to do this?

Regards

--
Avinash
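For context, the staged-prefetch idea behind the enqueue pipeline can be sketched as below. This is an illustrative stand-alone sketch, not the actual rte_sched_port_enqueue() code; `prefetch()` is a stand-in for rte_prefetch0(), and the `struct pkt`/`enqueue_burst` names are hypothetical:

```c
#include <assert.h>

/* Stand-in for rte_prefetch0(); real code issues a cache-line prefetch. */
static void prefetch(const void *p) { (void)p; }

struct pkt { int queue_id; };

static int enqueue_one(struct pkt *p, int *queue_len)
{
	queue_len[p->queue_id]++;
	return 1;
}

/* 4-stage software pipeline: while the last stage enqueues packet i,
 * earlier stages have already issued prefetches for packets i+1..i+3,
 * so the data is (hopefully) in cache by the time it is needed. */
int enqueue_burst(struct pkt *pkts[], int n, int *queue_len)
{
	int done = 0;
	for (int i = 0; i < n; i++) {
		/* stage 0: prefetch the packet arriving 3 iterations ahead */
		if (i + 3 < n) prefetch(pkts[i + 3]);
		/* stage 1: prefetch metadata of the packet 2 ahead */
		if (i + 2 < n) prefetch(&pkts[i + 2]->queue_id);
		/* stage 2: prefetch the destination queue of the packet 1 ahead */
		if (i + 1 < n) prefetch(&queue_len[pkts[i + 1]->queue_id]);
		/* stage 3: actually enqueue the current packet */
		done += enqueue_one(pkts[i], queue_len);
	}
	return done;
}
```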
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Avinash .
> Sent: Wednesday, February 26, 2020 10:46 AM
> To: dev@dpdk.org
> Cc: Gokul Bargaje <gokulbargaje.182009@nitk.edu.in>; Mohit P. Tahiliani
> <tahiliani@nitk.edu.in>
> Subject: [dpdk-dev] DPDK Enqueue Pipeline
>
> Hi all,
> The DPDK QoS scheduler has a 4-stage pipeline for enqueuing the packets.
> This is used for hiding the latency of prefetching the data structures.
> Why is there no pipeline for dequeuing the packets?

The dequeue operation happens in stages: scanning the bitmap (which holds the active-pipe information), then extracting the traffic class and queue of the pipe being considered for scheduling. As a result, the pipe's TC and queue information is available, but it is not exposed to the application since it is internal to the dequeue operation.

> How does the dequeue function maintain the state of a packet? In other
> words, if I want to backtrace the packet that is dequeued to get the info of
> what was the traffic class and from which queue the packet was dequeued.
> Is there any way to get this.

The QoS dequeue operation is built around multiple grinders (packet-crunching engines) that work on different pipes at a time. Each grinder follows a state machine (please have a look at the documentation) with multiple stages: pipe bitmap scanning, TC and queue detection, fetching the packet from the queue based on the available credits, etc. All of this information is kept in the scheduler's internal data structures and can be used to understand the flow or debug the code. Why do you want to expose that intermediate information through a public API during execution?

> Regards
>
> --
> Avinash
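To illustrate why the dequeue path inherently knows the TC and queue: the active-queue bitmap yields a flat queue index, and because each pipe owns a fixed, TC-grouped block of queues, that index decomposes arithmetically back into (pipe, tc, queue). A minimal sketch of that decomposition, with illustrative constants rather than DPDK's actual values:

```c
#include <assert.h>

/* Hypothetical hierarchy sizes, mirroring rte_sched's fixed layout
 * (each pipe owns a contiguous block of queues, grouped by TC). */
#define QUEUES_PER_TC   4
#define TCS_PER_PIPE    4
#define QUEUES_PER_PIPE (TCS_PER_PIPE * QUEUES_PER_TC)

struct sched_pos { unsigned pipe, tc, queue; };

/* Recover (pipe, tc, queue) from the flat index found in the bitmap. */
static struct sched_pos decode_qindex(unsigned qindex)
{
	struct sched_pos pos;
	pos.pipe  = qindex / QUEUES_PER_PIPE;
	pos.tc    = (qindex % QUEUES_PER_PIPE) / QUEUES_PER_TC;
	pos.queue = qindex % QUEUES_PER_TC;
	return pos;
}
```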
Thank you for the clarification. I was trying to get the queue and traffic
class information from where the packet is going to be dequeued. It's
resolved now.
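For anyone finding this thread later: if I understand the rte_sched API correctly, the classification stamped on each packet can be read back via rte_sched_port_pkt_read_tree_path(), the counterpart of rte_sched_port_pkt_write(). Conceptually these pack and unpack (subport, pipe, tc, queue) in per-mbuf metadata; the sketch below imitates that bit-packing with illustrative field widths, not DPDK's actual layout:

```c
#include <assert.h>
#include <stdint.h>

/* Pack the tree path into one 32-bit word: subport in bits 24-31,
 * pipe in bits 8-23, tc in bits 2-7, queue in bits 0-1.
 * Field widths are illustrative only. */
static uint32_t pkt_write_path(uint32_t subport, uint32_t pipe,
			       uint32_t tc, uint32_t queue)
{
	return (subport << 24) | (pipe << 8) | (tc << 2) | queue;
}

/* Unpack the tree path, mirroring what a read_tree_path helper does. */
static void pkt_read_path(uint32_t meta, uint32_t *subport,
			  uint32_t *pipe, uint32_t *tc, uint32_t *queue)
{
	*subport = meta >> 24;
	*pipe    = (meta >> 8) & 0xffff;
	*tc      = (meta >> 2) & 0x3f;
	*queue   = meta & 0x3;
}
```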
Regards
Avinash
On Fri, Feb 28, 2020 at 7:09 PM Singh, Jasvinder <jasvinder.singh@intel.com>
wrote:
>
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Avinash .
> > Sent: Wednesday, February 26, 2020 10:46 AM
> > To: dev@dpdk.org
> > Cc: Gokul Bargaje <gokulbargaje.182009@nitk.edu.in>; Mohit P. Tahiliani
> > <tahiliani@nitk.edu.in>
> > Subject: [dpdk-dev] DPDK Enqueue Pipeline
> >
> > Hi all,
> > The DPDK QoS scheduler has a 4-stage pipeline for enqueuing the packets.
> > This is used for hiding the latency of prefetching the data structures.
> > Why is there no pipeline for dequeuing the packets?
>
> The dequeue operation happens in stages: scanning the bitmap (which holds
> the active-pipe information), then extracting the traffic class and queue
> of the pipe being considered for scheduling. As a result, the pipe's TC
> and queue information is available, but it is not exposed to the
> application since it is internal to the dequeue operation.
>
> > How does the dequeue function maintain the state of a packet? In other
> > words, if I want to backtrace the packet that is dequeued to get the
> > info of what was the traffic class and from which queue the packet was
> > dequeued. Is there any way to get this.
> >
> The QoS dequeue operation is built around multiple grinders
> (packet-crunching engines) that work on different pipes at a time. Each
> grinder follows a state machine (please have a look at the documentation)
> with multiple stages: pipe bitmap scanning, TC and queue detection,
> fetching the packet from the queue based on the available credits, etc.
> All of this information is kept in the scheduler's internal data
> structures and can be used to understand the flow or debug the code. Why
> do you want to expose that intermediate information through a public API
> during execution?
>
>
> > Regards
> >
> > --
> > Avinash
>