DPDK usage discussions
* rte_eth_tx_burst return 0 after some time
@ 2024-09-09 13:25 Alan Arondel
  2024-09-09 16:39 ` Stephen Hemminger
  0 siblings, 1 reply; 2+ messages in thread
From: Alan Arondel @ 2024-09-09 13:25 UTC (permalink / raw)
  To: users

Hello Everyone,

I'm trying to add an export module to my application. I'm using DPDK
20.11, an 82599ES 10-Gigabit SFI/SFP+ Network Connection (10fb),
and the vfio-pci driver.
The port configuration is the following:

port_conf->txmode.mq_mode = ETH_MQ_TX_NONE;
port_conf->txmode.offloads = DEV_TX_OFFLOAD_MULTI_SEGS;

I use only one queue on the card, and its configuration is the following:
    nb_desc : 4096
    rs_threshold : 32
    free_threshold : 32
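
For context, that queue configuration maps onto the setup call roughly
like this (a sketch against the DPDK 20.11 API; `port_id` and `socket_id`
are assumed to be defined elsewhere in my application):

```c
/* Sketch only: the values above expressed via rte_eth_tx_queue_setup()
 * (DPDK 20.11 API). port_id and socket_id come from the surrounding code. */
struct rte_eth_txconf txconf = {
    .tx_rs_thresh   = 32,   /* rs_threshold */
    .tx_free_thresh = 32,   /* free_threshold */
};

ret = rte_eth_tx_queue_setup(port_id, 0 /* single TX queue */,
                             4096 /* nb_desc */, socket_id, &txconf);
```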

I have an issue where, at some point during the lifetime of the
application, rte_eth_tx_burst returns only 0 and nothing can be done
once it is stuck. This happens a few seconds after transmission
begins.

I have checked a couple of things:
- I still get bursts of packets to send.
- I still have plenty of mbufs in my mempool.
- I use t-rex to generate the traffic to forward. One profile has a lot
of small packets, and there I can sustain up to 2.8 Mpps (roughly
7.8 Gbps); sadly, with another traffic profile I can't get above
200 Kpps at best (roughly 2 Gbps, if not less).

Once the application is stuck, I tried stopping the traffic and
restarting it at a very low bandwidth, around 40 Mbps, but it changes
nothing.

Here is a code snippet of what I'm trying to do:

nb_rx = rte_ring_sc_dequeue_burst(od_h->rx_rings[i], (void **) pkts_burst,
                                  OUTPUT_DPDK_BURST_SIZE, NULL);

for (j = 0; j < nb_rx; j++) {
    if (!rte_pktmbuf_is_contiguous(pkts_burst[j])) {
        /* I get some jumbo frames in both traffic profiles, which I want
         * to track; mbuf size is 1512 + RTE_PKTMBUF_HEADROOM. */
        stats->jumbo_frame++;
    }

    /* I know sending one packet at a time is weird, but I have another
     * issue with packet order when sending more than one packet per call,
     * even though the mbufs inside pkts_burst are ordered correctly. */
    ret = rte_eth_tx_burst(port_id, 0, &pkts_burst[j], 1);
    if (unlikely(!ret)) {
        /* Packet not sent, just drop it. */
        rte_pktmbuf_free(pkts_burst[j]);
        stats->freed_mbuf++;

        /* This is my latest test, since I saw we could flush the mbufs
         * stuck in the driver's cache. */
        int flushed = rte_eth_tx_done_cleanup(port_id, 0, 0);
        if (flushed >= 0) {
            /* I can see some of them, but the counter stops moving once
             * the application no longer sends any packets. */
            stats->mbuf_flushed += flushed;
        } else {
            stats->mbuf_flushed = flushed;
        }
    } else {
        stats->nb_sent++;
        sent = 1;
    }
}
i++;

As far as I understand, the logic of the code seems to be OK, since it
works with another type of traffic. However, I have no clue why it gets
stuck at some point. If you have some insight to help dig further into
this, I'm all ears.
Also, I would like to understand free_threshold. As far as I understand
it, mbufs "stay" in the driver until there are not enough free
descriptors relative to the threshold, at which point the mbufs are
returned to the mempool. Is that right?

Thank you for any help on this.

Regards,
Alan.
