DPDK usage discussions
 help / color / mirror / Atom feed
* Relation between DPDK queue and descriptors
@ 2024-10-02 15:21 Mikael R Carlsson
  2024-10-02 15:29 ` Stephen Hemminger
  0 siblings, 1 reply; 7+ messages in thread
From: Mikael R Carlsson @ 2024-10-02 15:21 UTC (permalink / raw)
  To: users

[-- Attachment #1: Type: text/plain, Size: 495 bytes --]

Hi experts!

I am having a hard time finding good documentation about the relation between DPDK TX queues and descriptors.

Queue as in rte_eth_tx_queue_setup
Descriptor as in rte_eth_dev_adjust_nb_rx_tx_desc

We suspect we are running out of descriptors in the TX path, but we are not sure. We use more than one TX queue. Would we get more descriptors if we used only a single TX queue? Does anyone know of good documentation on TX queues and descriptors?

  / Mikael


[-- Attachment #2: Type: text/html, Size: 2385 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Relation between DPDK queue and descriptors
  2024-10-02 15:21 Relation between DPDK queue and descriptors Mikael R Carlsson
@ 2024-10-02 15:29 ` Stephen Hemminger
  2024-10-02 16:04   ` Mikael R Carlsson
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Hemminger @ 2024-10-02 15:29 UTC (permalink / raw)
  To: Mikael R Carlsson; +Cc: users

On Wed, 2 Oct 2024 15:21:45 +0000
Mikael R Carlsson <mikael.r.carlsson@tietoevry.com> wrote:

> Hi experts!
> 
> I am having a hard time finding good documentation about the relation between DPDK TX queues and descriptors.
> 
> Queue as in rte_eth_tx_queue_setup
> Descriptor as in rte_eth_dev_adjust_nb_rx_tx_desc
> 
> We suspect we are running out of descriptors in the TX path, but we are not sure. We use more than one TX queue. Would we get more descriptors if we used only a single TX queue? Does anyone know of good documentation on TX queues and descriptors?
> 
>   / Mikael
> 

A typical driver has a ring buffer shared between the driver and the hardware:
one ring for transmit, and another for receive.
The entries in the ring are hardware-specific data structures called descriptors.
Each descriptor usually holds a physical memory address, a length, and flags.

The number of Rx descriptors determines the number of unread frames the
driver can hold. Too small, and you risk dropping packets; too large, and
under a stress load the driver can end up buffering excessively, causing latency (bufferbloat).
The same applies on Tx, but it is less of a problem there because the network
is typically faster than the application can send packets.
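The ring described above can be sketched in plain C. This is an illustrative toy model only, not DPDK's actual structures: the names (`tx_desc`, `tx_ring`, `NB_TX_DESC`) and the descriptor layout are invented for the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NB_TX_DESC 8 /* tiny ring for illustration; real NICs use e.g. 512-4096 */

/* hypothetical descriptor layout: buffer address, length, status flags */
struct tx_desc {
    uint64_t buf_addr;
    uint16_t len;
    uint16_t flags;
};

struct tx_ring {
    struct tx_desc desc[NB_TX_DESC];
    uint16_t head;  /* next descriptor the hardware will consume */
    uint16_t tail;  /* next descriptor software will fill */
    uint16_t count; /* descriptors currently in flight */
};

/* Post a packet for transmission; fails when every descriptor is in use,
 * which is the "running out of descriptors" situation from the question. */
static bool tx_enqueue(struct tx_ring *r, uint64_t addr, uint16_t len)
{
    if (r->count == NB_TX_DESC)
        return false; /* ring full */
    r->desc[r->tail].buf_addr = addr;
    r->desc[r->tail].len = len;
    r->desc[r->tail].flags = 1; /* "owned by hardware" */
    r->tail = (r->tail + 1) % NB_TX_DESC;
    r->count++;
    return true;
}

/* Hardware signals a completed send, releasing one descriptor. */
static bool tx_complete(struct tx_ring *r)
{
    if (r->count == 0)
        return false;
    r->desc[r->head].flags = 0;
    r->head = (r->head + 1) % NB_TX_DESC;
    r->count--;
    return true;
}
```

Once tx_enqueue() starts returning false, the application is out of descriptors on that queue until the hardware completes earlier sends.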

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RE: Relation between DPDK queue and descriptors
  2024-10-02 15:29 ` Stephen Hemminger
@ 2024-10-02 16:04   ` Mikael R Carlsson
  2024-10-02 16:07     ` Pathak, Pravin
  0 siblings, 1 reply; 7+ messages in thread
From: Mikael R Carlsson @ 2024-10-02 16:04 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users

Hi!

Thanks for the response. 

I think I get the descriptor part, but what is the relation to queues? If the hardware supports 1024 descriptors and I need 6 queues, do I have 1024 descriptors on each TX queue? 

  / Mikael


-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
Sent: Wednesday, October 2, 2024 5:29 PM
To: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>
Cc: users@dpdk.org
Subject: Re: Relation between DPDK queue and descriptors

On Wed, 2 Oct 2024 15:21:45 +0000
Mikael R Carlsson <mikael.r.carlsson@tietoevry.com> wrote:

> Hi experts!
> 
> I am having a hard time finding good documentation about the relation between DPDK TX queues and descriptors.
> 
> Queue as in rte_eth_tx_queue_setup
> Descriptor as in rte_eth_dev_adjust_nb_rx_tx_desc
> 
> We suspect we run out of descriptors in TX path, we are not sure here. We use more than one TX queue. Will we get more descriptors if we only use one single TX queue? Does anyone know if there is some good documentation regarding the TX queue and the descriptors?
> 
>   / Mikael
> 

A typical driver has a hardware ring buffer between the driver and the hardware.
One ring for transmit, and another for receive.
The entries in the ring are hardware-specific data structures called descriptors.
Each descriptor usually has physical memory address, size, and flags.

The number of Rx descriptors determines the number of unread frames the driver can hold. Too small, and you risk dropping packets; too large and under stress load the driver can end up buffering excessively causing latency (bufferbloat).
Similar on Tx but less of a problem because typically the network is faster than the application can send packets.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RE: Relation between DPDK queue and descriptors
  2024-10-02 16:04   ` Mikael R Carlsson
@ 2024-10-02 16:07     ` Pathak, Pravin
  2024-10-03  7:37       ` Mikael R Carlsson
  0 siblings, 1 reply; 7+ messages in thread
From: Pathak, Pravin @ 2024-10-02 16:07 UTC (permalink / raw)
  To: Mikael R Carlsson, Stephen Hemminger; +Cc: users

Hi Mikael -
ChatGPT provides a good description of the relation between these two. If you ask it to do a deep dive, it will provide a good bit of programming and optimization detail.
Regards
Pravin


> -----Original Message-----
> From: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>
> Sent: Wednesday, October 2, 2024 12:05 PM
> To: Stephen Hemminger <stephen@networkplumber.org>
> Cc: users@dpdk.org
> Subject: RE: Relation between DPDK queue and descriptors
> 
> Hi!
> 
> Thanks for the response.
> 
> I think I get the descriptor part, but what is the relation to queues? If the
> hardware supports 1024 descriptors and I need 6 queues, do I have 1024
> descriptors on each TX queue?
> 
>   / Mikael
> 
> 
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, October 2, 2024 5:29 PM
> To: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>
> Cc: users@dpdk.org
> Subject: Re: Relation between DPDK queue and descriptors
> 
> On Wed, 2 Oct 2024 15:21:45 +0000
> Mikael R Carlsson <mikael.r.carlsson@tietoevry.com> wrote:
> 
> > Hi experts!
> >
> > I am having a hard time finding good documentation about the relation
> between DPDK TX queues and descriptors.
> >
> > Queue as in rte_eth_tx_queue_setup
> > Descriptor as in rte_eth_dev_adjust_nb_rx_tx_desc
> >
> > We suspect we run out of descriptors in TX path, we are not sure here. We
> use more than one TX queue. Will we get more descriptors if we only use one
> single TX queue? Does anyone know if there is some good documentation
> regarding the TX queue and the descriptors?
> >
> >   / Mikael
> >
> 
> A typical driver has a hardware ring buffer between the driver and the
> hardware.
> One ring for transmit, and another for receive.
> The entries in the ring are hardware-specific data structures called descriptors.
> Each descriptor usually has physical memory address, size, and flags.
> 
> The number of Rx descriptors determines the number of unread frames the
> driver can hold. Too small, and you risk dropping packets; too large and under
> stress load the driver can end up buffering excessively causing latency
> (bufferbloat).
> Similar on Tx but less of a problem because typically the network is faster than
> the application can send packets.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RE: Relation between DPDK queue and descriptors
  2024-10-02 16:07     ` Pathak, Pravin
@ 2024-10-03  7:37       ` Mikael R Carlsson
  2024-10-03  8:34         ` Dmitry Kozlyuk
  0 siblings, 1 reply; 7+ messages in thread
From: Mikael R Carlsson @ 2024-10-03  7:37 UTC (permalink / raw)
  To: Pathak, Pravin, Stephen Hemminger; +Cc: users

Hi!

Thanks. 

According to ChatGPT, the descriptors are shared across all TX queues.

So, in a scenario with 4 TX queues and 1024 descriptors, I would get a maximum of 256 descriptors per TX queue (if I want the same amount on all queues). But if I used only 1 TX queue, I would get all 1024 descriptors on that single queue.

  / Mikael




-----Original Message-----
From: Pathak, Pravin <pravin.pathak@intel.com> 
Sent: Wednesday, October 2, 2024 6:07 PM
To: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>; Stephen Hemminger <stephen@networkplumber.org>
Cc: users@dpdk.org
Subject: RE: Relation between DPDK queue and descriptors

Hi Mikael -
ChatGPT provides a good description of the relation between these two. If you ask it to do a deep dive, it will provide a good bit of programming and optimization detail.
Regards
Pravin


> -----Original Message-----
> From: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>
> Sent: Wednesday, October 2, 2024 12:05 PM
> To: Stephen Hemminger <stephen@networkplumber.org>
> Cc: users@dpdk.org
> Subject: RE: Relation between DPDK queue and descriptors
>
> Hi!
>
> Thanks for the response.
>
> I think I get the descriptor part, but what is the relation to queues? 
> If the hardware supports 1024 descriptors and I need 6 queues, do I 
> have 1024 descriptors on each TX queue?
>
>   / Mikael
>
>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, October 2, 2024 5:29 PM
> To: Mikael R Carlsson <mikael.r.carlsson@tietoevry.com>
> Cc: users@dpdk.org
> Subject: Re: Relation between DPDK queue and descriptors
>
> On Wed, 2 Oct 2024 15:21:45 +0000
> Mikael R Carlsson <mikael.r.carlsson@tietoevry.com> wrote:
>
> > Hi experts!
> >
> > I am having a hard time finding good documentation about the relation
> between DPDK TX queues and descriptors.
> >
> > Queue as in rte_eth_tx_queue_setup
> > Descriptor as in rte_eth_dev_adjust_nb_rx_tx_desc
> >
> > We suspect we run out of descriptors in TX path, we are not sure 
> > here. We
> use more than one TX queue. Will we get more descriptors if we only 
> use one single TX queue? Does anyone know if there is some good 
> documentation regarding the TX queue and the descriptors?
> >
> >   / Mikael
> >
>
> A typical driver has a hardware ring buffer between the driver and the 
> hardware.
> One ring for transmit, and another for receive.
> The entries in the ring are hardware-specific data structures called descriptors.
> Each descriptor usually has physical memory address, size, and flags.
>
> The number of Rx descriptors determines the number of unread frames 
> the driver can hold. Too small, and you risk dropping packets; too 
> large and under stress load the driver can end up buffering 
> excessively causing latency (bufferbloat).
> Similar on Tx but less of a problem because typically the network is 
> faster than the application can send packets.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Relation between DPDK queue and descriptors
  2024-10-03  7:37       ` Mikael R Carlsson
@ 2024-10-03  8:34         ` Dmitry Kozlyuk
  2024-10-03 15:20           ` Stephen Hemminger
  0 siblings, 1 reply; 7+ messages in thread
From: Dmitry Kozlyuk @ 2024-10-03  8:34 UTC (permalink / raw)
  To: Mikael R Carlsson; +Cc: Pathak, Pravin, Stephen Hemminger, users

2024-10-03 07:37 (UTC+0000), Mikael R Carlsson:
> Hi!
> 
> Thanks. 
> 
> According to chatgpt the descriptors are shared over all TX queues. 
> 
> So, in a 4 TX queue and 1024 descriptors scenario I would be able to get maximum 256 descriptors per TX queue (If I want same amount on all queues). But if I only used 1 TX queue, I would get all 1024 descriptors on that single TX queue.

This is not so, ChatGPT errs.
The number of advertised and configured descriptors is per queue
(the latter is per the specific queue being configured, actually).
You won't get more descriptors per queue if you use fewer queues.
Note, however, that queues consume NIC resources and larger queues stress the
CPU cache, so it is not always best to have many queues or large ones.
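Dmitry's point, that descriptor counts are configured and limited per queue, can be illustrated with a small sketch of what the adjust step roughly does. The names here are hypothetical stand-ins loosely modeled on DPDK's rte_eth_desc_lim and rte_eth_dev_adjust_nb_rx_tx_desc, and the limit values are made up for the example:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-queue descriptor limits, in the spirit of DPDK's
 * struct rte_eth_desc_lim. The values used below are invented. */
struct desc_lim {
    uint16_t nb_max;   /* maximum descriptors per queue */
    uint16_t nb_min;   /* minimum descriptors per queue */
    uint16_t nb_align; /* required alignment of the count */
};

/* Clamp a requested per-queue descriptor count to the device limits and
 * round up to the required alignment, roughly the spirit of
 * rte_eth_dev_adjust_nb_rx_tx_desc(). Note that the result depends only
 * on this one queue's request and the per-queue limits, not on how many
 * queues are configured. */
static uint16_t adjust_nb_desc(uint16_t nb, const struct desc_lim *lim)
{
    if (nb < lim->nb_min)
        nb = lim->nb_min;
    /* round up to a multiple of nb_align */
    nb = (uint16_t)(((nb + lim->nb_align - 1) / lim->nb_align) * lim->nb_align);
    if (nb > lim->nb_max)
        nb = lim->nb_max;
    return nb;
}
```

Using fewer queues does not raise nb_max: each queue's request is checked against the same per-queue limits.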

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Relation between DPDK queue and descriptors
  2024-10-03  8:34         ` Dmitry Kozlyuk
@ 2024-10-03 15:20           ` Stephen Hemminger
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Hemminger @ 2024-10-03 15:20 UTC (permalink / raw)
  To: Dmitry Kozlyuk; +Cc: Mikael R Carlsson, Pathak, Pravin, users

On Thu, 3 Oct 2024 11:34:36 +0300
Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:

> 2024-10-03 07:37 (UTC+0000), Mikael R Carlsson:
> > Hi!
> > 
> > Thanks. 
> > 
> > According to chatgpt the descriptors are shared over all TX queues. 
> > 
> > So, in a 4 TX queue and 1024 descriptors scenario I would be able to get maximum 256 descriptors per TX queue (If I want same amount on all queues). But if I only used 1 TX queue, I would get all 1024 descriptors on that single TX queue.  
> 
> This is not so, ChatGPT errs.
> The number of advertised and configured descriptors is per queue
> (the latter is per the specific queue being configured, actually).
> You won't get more descriptors per queue if you use fewer queues.
> Note, however, that queues consume NIC resources and larger queues stress
> CPU cache, so it is not always the best to have many queues or large queues.

Also, a large number of descriptors means the mbuf pool must be larger, which can
exhaust the available hugepage memory. If you had 4 Tx queues * 1024 descriptors
per queue, the Tx side could consume as much as 4K mbufs * 2 KB each = 8 MB of
hugepage memory. The same applies on the Rx side.
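That back-of-envelope arithmetic can be sketched as a tiny helper. The 2 KB mbuf size is an assumption (a typical default data-room size; the real figure depends on the mempool configuration):

```c
#include <assert.h>
#include <stddef.h>

/* Worst-case memory pinned by Tx descriptors, assuming one mbuf per
 * descriptor and a fixed mbuf buffer size. */
static size_t tx_mbuf_bytes(size_t nb_queues, size_t nb_desc, size_t mbuf_size)
{
    return nb_queues * nb_desc * mbuf_size;
}
```

For the example above: 4 queues * 1024 descriptors * 2048 bytes = 8 MiB; a single queue with the same per-queue depth pins a quarter of that.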

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2024-10-03 15:20 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-10-02 15:21 Relation between DPDK queue and descriptors Mikael R Carlsson
2024-10-02 15:29 ` Stephen Hemminger
2024-10-02 16:04   ` Mikael R Carlsson
2024-10-02 16:07     ` Pathak, Pravin
2024-10-03  7:37       ` Mikael R Carlsson
2024-10-03  8:34         ` Dmitry Kozlyuk
2024-10-03 15:20           ` Stephen Hemminger

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).